Trust in AI systems remains the biggest impediment to their adoption; 91% of companies would like to have explainable AI, a key capability complementing ethics to make AI trustworthy. To address this, my dissertation develops a novel paradigm of Knowledge-infused Learning (K-iL) that exploits domain knowledge and application semantics to enhance existing machine learning methods by infusing relevant conceptual information into a statistical, data-driven computational approach. Highlighting challenges and opportunities in public health and crisis response, I discuss three specific strategies of infusion: (a) Shallow infusion, in the form of vectorized representations of knowledge; (b) Semi-Deep infusion, in the form of attention, knowledge-based optimization, and constraint satisfaction; and (c) Deep infusion, which incorporates knowledge at different levels of abstraction in the latent layers of ML/DL models. Further, I describe new theories, new evaluation metrics, and new datasets constructed to demonstrate the performance and explainability gains obtained by combining the strengths of symbolic and statistical AI.
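As a minimal illustration of the shallow-infusion strategy (all names and dimensions here are hypothetical, not from the dissertation itself): knowledge enters the pipeline only as pre-trained vectors, e.g., knowledge-graph entity embeddings concatenated with statistical text features before a downstream model sees them.

```python
import numpy as np

def shallow_infuse(text_features: np.ndarray,
                   kg_embedding: np.ndarray) -> np.ndarray:
    """Concatenate data-driven features with a vectorized
    representation of domain knowledge (shallow infusion)."""
    return np.concatenate([text_features, kg_embedding], axis=-1)

# Illustrative inputs: a sentence-encoder output and a KG entity vector.
text_vec = np.random.rand(768)
kg_vec = np.random.rand(100)

fused = shallow_infuse(text_vec, kg_vec)
print(fused.shape)  # (868,)
```

Semi-deep and deep infusion differ in that the knowledge influences the optimization or the latent layers themselves, rather than just the input representation.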
Current artificial intelligence (AI) systems do not incorporate high-level reasoning and understanding about causality. As a result, they do not form the kinds of semantic representations and inferences that humans can. Enriching AI systems with causality and counterfactual reasoning is challenging due to the lack of a rich representation of causality. My research focuses on developing CausalKG, a richer representation of causality using knowledge graphs for better explainability, with applications in autonomous driving and healthcare. CausalKG can infuse existing KGs with causal knowledge of the domain to enable interventional and counterfactual reasoning. The advantage of constructing a CausalKG is the integration of causality into reasoning and prediction processes, such as agent action understanding, planning, and medical diagnosis. Such integration can improve the accuracy and reliability of existing AI algorithms by providing better causal and domain-adaptable explainability of the outcome.
Autonomous systems are becoming an integral part of human lives, with applications ranging from smart robots to self-driving cars. One of the primary goals of autonomous systems is to make machines intelligent so they can sense and learn to function within changing environments. The margin for error in the decision-making of these systems is extremely small, requiring a thorough understanding of the context. However, training machines with limited scenarios to function the way a human with lifelong experience would is an arduous task. My dissertation focuses on narrowing this gap in learning, using Knowledge Graphs as a potential knowledge source to capture human experience. Specifically, I focus on building novel approaches to “infuse” domain- and task-specific knowledge within the learning modules of autonomous systems, so they can have a better understanding of the context and function safely and effectively.
My interdisciplinary research seeks to develop AI techniques to inform public health policy decisions, with a focus on healthcare, addiction, and mental health. I apply ML and NLP techniques augmented with external knowledge and ontologies to analyze social media and other big data. My research also deals with human-in-the-loop AI with knowledge-aware learning to provide explainable results that matter to domain specialists. To further support this goal, my work demonstrates the value of multi-task learning over diverse social media data, where models using supervised and unsupervised strategies learn from multimodal content.
As we eat to live, AI models are being incorporated into healthcare applications to nudge us toward healthy eating habits, generate new recipes, explore new flavor combinations, and make food recommendations. While the required features for each task might vary, understanding the cooking process is essential to all of them. My research focuses on modeling the cooking process in a way that captures the cause and effect of this sequential process on ingredients. By understanding the cooking process along with ingredients and nutrition, health-specific recipes can be generated and classified, and new flavor combinations can be explored based on cooking style and ingredients. The cooking model can further aid users with food restrictions by finding alternate ingredients, a choice that depends heavily on the cooking process of a given recipe.
Knowledge Graphs (KGs) are increasingly used in a wide array of applications, from search engines to chatbots and recommender systems. However, these applications often draw knowledge from interdisciplinary domains and operate in environments with siloed, incomplete, and heterogeneous knowledge. My research deals with multi-faceted aspects of KGs relating to contextualization, temporality, and personalization, as well as mechanisms to derive new knowledge to support growing, large-scale, and maintainable KGs. Combining these evolutionary dimensions allows a KG to serve as a knowledge model for intelligent systems in dynamic, high-volume, and real-time domains (healthcare and finance). Concretely, I research infusing domain experts’ knowledge into data-driven approaches to streamline knowledge integration and expansion at both the conceptual and data levels.
Analogies play a crucial role in human understanding of concepts by allowing people to understand and produce new concepts in light of previously experienced or understood ones. This ability is particularly valuable in education. For example, to teach a student about atomic structure, a teacher can create an analogy between the solar system (a concept the student is already familiar with) and atomic structure (the new concept). I am intrigued by how natural language processing and understanding can help the process of analogy-making in a classroom setting to benefit students and teachers. One of the most challenging issues in analogy-making is the limited ability of current language models to capture the meaning of language. I am interested in identifying gaps and limitations in language models with respect to capturing meaning, and in identifying knowledge-driven techniques to bridge these gaps in the context of analogy-making.
AI algorithms are used today in various real-life applications such as health, social media analysis, and recommendation systems. However, for adoption in the real world, these algorithms need to mediate effectively between external domain and process knowledge specific to the application and the AI model’s features, at appropriate levels of abstraction. Currently, I am developing such algorithms within a mental health virtual assistant that needs to infuse clinical guidelines (domain knowledge) and triaging (process knowledge) into the AI method to enable trustworthy, explainable decision-making and facilitate real-world adoption.
“Communication - the human connection - is the key to personal and career success”. I was working on Natural Language Processing projects when I came across this quote by Paul J. Meyer. It made me take a step back and figure out how to really make artificial agents communicate better. When humans communicate, processing and understanding happen at scale and in real time, thanks to the knowledge we have learnt over time. Although current systems can process language, they fall short when it comes to understanding context. At AIISC, we specialise in Knowledge Graphs. My current research involves looking at how external knowledge can aid in building efficient language understanding. Current state-of-the-art methods can readily differentiate the senses of the word “bank” (a riverbank versus a financial bank), but fail to find the similarity between “heart attack” and “myocardial infarction”. I am exploring the use of knowledge-infused methodologies to attain this deeper understanding.
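A toy sketch of the gap described above (the concept identifiers and lookup table here are illustrative, not a real terminology service): surface-level similarity finds nothing shared between the two phrases, while a knowledge-graph lookup that maps phrases to shared concept identifiers, as medical vocabularies like UMLS do, recognizes them as the same concept.

```python
def lexical_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word sets -- a stand-in for surface similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical KG lookup: phrase -> concept identifier.
# Both phrases map to one concept, so the KG sees them as equivalent.
concept_of = {
    "heart attack": "C0027051",
    "myocardial infarction": "C0027051",
}

a, b = "heart attack", "myocardial infarction"
print(lexical_overlap(a, b))           # 0.0 -- no words in common
print(concept_of[a] == concept_of[b])  # True -- same concept in the KG
```

The point is not the specific lookup mechanism but that knowledge external to the text supplies an equivalence the statistics of the surface form cannot.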
For years, AI algorithms have produced optimal solutions to problems such as Go, chess, and the Rubik’s Cube. But often, these solutions are not understandable by humans. I am working on simplifying the algorithms and making them human-understandable. My preliminary work in this direction tackles the Rubik’s Cube, where explanations for the solutions produced by the DeepCubeA solver, derived using Inductive Logic Programming (ILP), teach humans how to solve the cube. An enriched domain knowledge graph is a crucial component for extracting conceptual information and providing meaningful explanations. I am also building a knowledge graph tool to create personalised and explainable knowledge graphs, which can be helpful in healthcare, smart manufacturing, and other growing domains.