Abstract
Virtual health agents (VHAs) have received considerable attention, but early efforts have focused on collecting data, helping patients follow generic health guidelines, and sending reminders for clinical appointments. While presenting the collected data and visit frequency to the clinician is useful, a VHA needs further context and personalization to interpret what the data means in clinical terms; without this, VHAs have been of limited use in managing health. Such understanding enables patient empowerment and self-appraisal, i.e., helping the patient interpret the data to understand changes in their health conditions, and self-management, i.e., helping the patient better manage their health through stronger adherence to clinician guidelines and the clinician-recommended care plan. Crisis conditions such as the current pandemic have further stressed our healthcare system and made such advanced support more necessary and in demand. Consider the rapid growth in mental health needs: patients with existing mental health conditions have worsened, and many others have developed such conditions due to the challenges of lockdown, isolation, and economic hardship. The severe shortage of timely clinical expertise to meet this rapidly growing demand motivates this research in developing more advanced VHAs and evaluating them in the context of mental health management.
Intellectual Contributions of this tutorial
This tutorial showcases AI strategies that provide medical context to patient data with the help of a knowledge graph. Personalization is supported through a personalized knowledge graph (PKG) that captures the patient's health management objectives within the context of the clinical guidelines and care plan. Continuously capturing this information by analyzing patient-VHA interactions, together with strategies for creating engaging interactions (conversations), can further augment the PKG. These operations are required to support self-appraisal and self-management and, when necessary, to perform fail-safe tasks such as connecting the patient to a crisis helpline or professional help. The core innovation is a novel knowledge-infused reinforcement learning method. A by-product of this approach is transparency in decision-making, with the ability to offer explanations the user can understand.
Tutorial Focus
We combine the qualities of Knowledge-infused Learning [1-5, 7], Reinforcement Learning [4, 5], and advanced use of Knowledge Graphs [5, 8-9] (e.g., through the incorporation of process knowledge) to develop a VHA that is transparent in its decision making, owing to the use of knowledge, and that generates explanations in terms the user can understand. The system's transparency also allows component-wise understanding of the model, from which model developers can benefit as well [6]. Finally, by explicitly using user feedback to iterate on and revise the user-level personalized knowledge graph, the model, and the quality of explanations, we can deliver a VHA that clinicians and patients can use in practice for their healthcare needs. The following two focus modules of the tutorial define a cohesive component-level architecture of the VHA:
Personalized Knowledge Graph (PKG) construction: Figure 1 shows a component-level modular architecture for constructing knowledge graphs. During the tutorial, we will detail how to construct a knowledge graph (KG), drawing on our extensive body of research at the intersection of KGs, natural language processing/understanding, and artificial intelligence. The tutorial will adapt the flow of information illustrated in this architecture for constructing the PKG.
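To make the PKG idea concrete, here is a minimal sketch of a PKG as an in-memory triple store with a shared clinical-guideline layer and a patient-specific layer. The entities, relations, and patient identifiers below are hypothetical illustrations, not the tutorial's actual schema.

```python
# Minimal sketch of a personalized knowledge graph (PKG) as RDF-style triples.
# Entity and relation names are illustrative assumptions.
from collections import defaultdict

class PersonalizedKG:
    """A tiny in-memory triple store indexed by subject."""
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))
        self.by_subject[subj].add((pred, obj))

    def query(self, subj, pred=None):
        """Return objects linked to `subj`, optionally filtered by predicate."""
        return [o for (p, o) in self.by_subject[subj] if pred is None or p == pred]

# Clinical-guideline layer (shared across patients)
pkg = PersonalizedKG()
pkg.add("MajorDepressiveDisorder", "assessed_by", "PHQ-9")
pkg.add("PHQ-9", "max_score", 27)

# Patient-specific layer, captured from patient-VHA interactions
pkg.add("patient_42", "has_condition", "MajorDepressiveDisorder")
pkg.add("patient_42", "last_phq9_score", 14)
pkg.add("patient_42", "care_plan_goal", "sleep_8_hours")

print(pkg.query("patient_42", "has_condition"))  # ['MajorDepressiveDisorder']
```

In practice one would use an RDF store (e.g., rdflib) with guideline vocabularies, but the same two-layer structure carries over.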
Knowledge-infused Reinforcement Learning using the PKG: predicts the next high-level action, such as determining the appropriate question for interacting with a specific patient (e.g., performing a mental health status check, measuring the safety of information in the PKG and interactions, asking clinical validation questions), while incorporating patient feedback and domain (clinical) knowledge (Figure 2).
Tutorial Organization
Knowledge-infused Learning (KiL): KiL, a form of neuro-symbolic AI, is a novel paradigm that incorporates a variety of explicit (symbolic) knowledge into data-driven statistical AI frameworks to advance machine intelligence. Combining knowledge and data in deep learning models enables learning both lower-level syntactic and lexical features from data through statistical (deep) learning and higher-level concepts from knowledge. Using knowledge also allows greater transparency in decision making and enables the explanations users need for informed decisions.
Anatomy of KiL
Shallow Infusion: The knowledge is converted to an embedding vector and concatenated with the data vector before being passed into the traditional deep learning pipeline. This method is fast and scalable, and while it improves over using the data alone, compressing knowledge into a vector-based embedding loses much of the semantic and relational information.
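A minimal sketch of shallow infusion: a KG embedding of the input's concepts is concatenated with the text embedding before the downstream model sees it. The encoders here are random stand-ins for learned models (e.g., averaged word vectors and a TransE-style KG embedding), and the function names are illustrative assumptions.

```python
# Sketch of shallow infusion: concatenate a data vector with a knowledge vector.
import numpy as np

rng = np.random.default_rng(0)

def text_embedding(tokens, dim=8):
    # Stand-in for a learned text encoder (e.g., averaged word vectors).
    return rng.standard_normal(dim)

def kg_embedding(concepts, dim=4):
    # Stand-in for a pre-trained KG embedding averaged over linked concepts.
    return rng.standard_normal(dim)

def shallow_infuse(tokens, concepts):
    """Concatenate data and knowledge vectors into one model input."""
    return np.concatenate([text_embedding(tokens), kg_embedding(concepts)])

x = shallow_infuse(["i", "feel", "hopeless"], ["MDD", "PHQ-9"])
print(x.shape)  # (12,) -- 8 data dimensions + 4 knowledge dimensions
```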
Semi-Deep Infusion: This technique uses the knowledge to guide the parameter-learning process of the deep learning model, ensuring that the parameters governing data patterns remain somewhat aligned with the infused knowledge. However, parametric models are many layers deep, with each layer representing a different level of abstraction; which kind of knowledge should guide the parameters of which specific layers remains an open question.
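One common way to realize semi-deep infusion is through the loss function: the usual data loss is augmented with a penalty for disagreeing with a knowledge-derived prior over labels. The sketch below assumes such a hypothetical guideline-implied prior; the weighting scheme is illustrative, not a fixed recipe.

```python
# Sketch of semi-deep infusion via a knowledge-consistency term in the loss.
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    return -np.sum(p * np.log(q + eps))

def semi_deep_loss(pred, gold, knowledge_prior, lam=0.5):
    """Data loss plus a knowledge-consistency penalty weighted by `lam`."""
    data_loss = cross_entropy(gold, pred)
    knowledge_loss = cross_entropy(knowledge_prior, pred)
    return data_loss + lam * knowledge_loss

pred = np.array([0.7, 0.2, 0.1])   # model's predicted label distribution
gold = np.array([1.0, 0.0, 0.0])   # ground-truth label (one-hot)
prior = np.array([0.6, 0.3, 0.1])  # hypothetical guideline-implied label prior

loss = semi_deep_loss(pred, gold, prior)
```

Backpropagating this combined loss nudges the parameters toward knowledge-consistent predictions; setting `lam=0` recovers plain supervised training.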
Deep Infusion: This technique guides the parameters of each layer of the deep learning model using stratified layers of abstraction in the knowledge graph, thus providing the optimal level of knowledge infusion.
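The sketch below illustrates the idea of matching knowledge strata to network depth: each hidden representation is blended with a knowledge vector from the corresponding abstraction level of a stratified KG. The fixed gating weight and the two-stratum setup are simplifying assumptions; in practice the gate would be learned.

```python
# Sketch of deep infusion: layer-wise blending with stratified KG knowledge.
import numpy as np

rng = np.random.default_rng(1)
dim = 6
W1, W2 = rng.standard_normal((dim, dim)), rng.standard_normal((dim, dim))

# Knowledge vectors for two abstraction strata (e.g., lexical vs. conceptual),
# stand-ins for embeddings drawn from a stratified knowledge graph.
k_layers = [rng.standard_normal(dim), rng.standard_normal(dim)]

def infused_forward(x, gate=0.3):
    h = np.tanh(W1 @ x)
    h = (1 - gate) * h + gate * k_layers[0]   # infuse low-level knowledge
    h = np.tanh(W2 @ h)
    h = (1 - gate) * h + gate * k_layers[1]   # infuse high-level knowledge
    return h

out = infused_forward(rng.standard_normal(dim))
```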
KiL for End User-level Explainability: Over the years, several methods have been proposed to explain the outcomes of deep learning models, broadly classified as explainable AI (XAI). Much as a debugger interprets the exceptions thrown by program errors, XAI explanations can be thought of as a stack trace: the system's developer can understand them, but the end-user and application/domain expert cannot. XAI explanations make sense to the model debuggers and computer scientists who develop the models. End-users like clinicians, however, require explanations that are meaningful at the application or domain level (e.g., compliance with clinical guidelines). For example, visualizing attention in a model that predicts suicidality tells the computer scientist that the model and its components are functioning seemingly well, but it still does not explain the outcome to the clinician through a line of reasoning they can comprehend. For instance, when decision making is based on the Columbia Suicide Severity Rating Scale (C-SSRS), it is important for a model to explain its outcome in terms of the flow of questions in the C-SSRS. This would help clinicians assimilate model outcomes and make conscious decisions.
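The sketch below illustrates what such a guideline-grounded explanation could look like: the model's severity outcome is justified by the ordered screening questions answered affirmatively. The five-step flow is a heavily simplified illustration of the C-SSRS, not the full clinical instrument.

```python
# Illustrative sketch of a C-SSRS-style line-of-reasoning explanation.
# The question flow below is a simplified assumption, not the real instrument.
CSSRS_FLOW = [
    ("ideation", "Wish to be dead?"),
    ("ideation_active", "Active suicidal thoughts?"),
    ("method", "Thought about how?"),
    ("intent", "Any intention to act?"),
    ("plan", "Worked out a specific plan?"),
]

def explain_severity(answers):
    """Severity = deepest affirmative step; trace = clinician-readable reasoning."""
    trace, severity = [], 0
    for level, (key, question) in enumerate(CSSRS_FLOW, start=1):
        ans = answers.get(key, False)
        trace.append(f"Q{level}: {question} -> {'yes' if ans else 'no'}")
        if not ans:
            break
        severity = level
    return severity, trace

severity, trace = explain_severity({"ideation": True, "ideation_active": True})
```

The returned trace follows the guideline's own question order, so a clinician can check each step of the model's reasoning against the scale they already use.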
Knowledge-infused Reinforcement Learning (KiRL): Reinforcement learning trains the VHA to understand the patient through trial-and-error correction driven directly by patient feedback. This continuous user-feedback-based correction adds a level of personalization that is not present in supervised learning methods, which makes reinforcement learning an attractive alternative for human-AI collaboration.
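The sketch below shows one simple way knowledge can enter such a feedback-driven loop: candidate next actions for the VHA are scored by learned reward estimates, while a PKG-derived constraint masks out actions deemed unsafe for this patient. The action names, the constraint, and the epsilon-greedy bandit update are all illustrative assumptions, not the tutorial's specific method.

```python
# Sketch of knowledge-constrained action selection with patient-feedback updates.
import random

ACTIONS = ["status_check", "safety_probe", "clinical_validation", "escalate_helpline"]

def allowed(action, state):
    """Hypothetical PKG-derived constraint: escalate only after a safety probe."""
    if action == "escalate_helpline" and not state.get("safety_probe_done"):
        return False
    return True

class KiRLAgent:
    def __init__(self, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
        self.n = {a: 0 for a in ACTIONS}    # times each action was tried

    def select(self, state, epsilon=0.1):
        candidates = [a for a in ACTIONS if allowed(a, state)]
        if random.random() < epsilon:
            return random.choice(candidates)  # explore
        return max(candidates, key=lambda a: self.q[a])  # exploit

    def update(self, action, feedback):
        """Incremental mean update from patient feedback in [-1, 1]."""
        self.n[action] += 1
        self.q[action] += (feedback - self.q[action]) / self.n[action]

random.seed(0)
agent = KiRLAgent()
a = agent.select({"safety_probe_done": False})
agent.update(a, feedback=1.0)
```

The knowledge constraint keeps exploration inside a clinically safe action set, while the feedback update personalizes the agent to the individual patient over time.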
KiRL: We combine the aforementioned Knowledge-infused Learning and Reinforcement Learning to develop a VHA that is transparent in its decision making, owing to the use of knowledge, and that generates explanations in terms the user can understand, i.e., user-level explanations as an upgrade over traditional XAI-based explanations. The system's transparency also allows component-wise understanding of the model, from which model developers can benefit as well. Finally, by explicitly using user feedback to iterate on and revise the user-level personalized knowledge graph, the model, and the quality of explanations, we can deliver a VHA that clinicians and patients can use in practice for their healthcare needs.
Length of the Tutorial
We plan a 1-hour lecture-style tutorial with one break (5-10 minutes for questions and answers), followed by 30 minutes of hands-on practice (total: 1 hour 30 minutes).
Expected background and prerequisite of audience
The tutorial will mix lecture-style presentation with hands-on practice in the Python programming language. The audience is expected to have a basic understanding of deep/machine learning, natural language processing, and semantic technologies (e.g., linked open data). We aim to guide attendees through a high-level tour of the most recent approaches proposed by researchers. We also expect basic familiarity with social media platforms such as Twitter and Reddit. Participants should bring their laptops with all the required tools installed; details on the tools needed and the background material will be provided upon acceptance of the tutorial proposal. By the end of the tutorial, we expect attendees to understand how knowledge graphs can enhance the performance (quality of results), utility, interpretability, and explainability of deep learning, and to be prepared to apply knowledge-infused deep learning to real-world applications.
Presenters' Biographies
Artificial Intelligence Institute, University of South Carolina
He is a Ph.D. student at the AIISC. He completed his master's in computer science at Indiana University Bloomington and has worked at UT Dallas's StARLinG Lab. His research interests include statistical relational artificial intelligence, sequential decision making, knowledge graphs, and reinforcement learning. His work has been published in reputed venues including IEEE, KR, AAAI, AAMAS, and ECML.
Artificial Intelligence Institute, University of South Carolina
Manas Gaur is a Ph.D. candidate at the Artificial Intelligence Institute and a visiting researcher at the Alan Turing Institute. Earlier, he was a Data Science for Social Good fellow with the University of Chicago and an AI for Social Good fellow with Dataminr Inc. Manas's research at the interface of AI and knowledge graphs introduces a novel paradigm termed Knowledge-infused Learning (KiL). KiL has been shown to provide explainable and interpretable frameworks for conversational AI, domain adaptation, recommender systems, and learning-to-rank problems, and its tangible outcomes have been covered by many media outlets. Currently, his research focuses on applying KiL to mental healthcare, crisis informatics, digital security, and conversational assistance.
Artificial Intelligence Institute, University of South Carolina
Qi Zhang is an assistant professor in the Computer Science and Engineering department and the Artificial Intelligence Institute at the University of South Carolina. He received his Ph.D. from the University of Michigan in 2021. His research develops solutions for coordinating systems of decision-making agents operating in uncertain, dynamic environments. As hand-engineered solutions for such environments often fall short, he uses ideas from planning and reinforcement learning to develop and analyze algorithms that autonomously coordinate agents in an effective, trustworthy, and communication-efficient manner. In particular, he has worked on social commitments for trustworthy coordination, on communication learning and language emergence among coordinated agents, and on applications of (multi-agent) reinforcement learning in intelligent transportation systems, dialogue systems, and multi-robot systems.
Artificial Intelligence Institute, University of South Carolina
Prof. Amit Sheth is an educator, researcher, and entrepreneur. He is a Fellow of the IEEE, AAAI, AAAS, and ACM. His awards include the IEEE TCSVC Research Innovation Award, the Trustee Award, the 10-year award of the International Semantic Web Conference, and the Ohio Faculty Commercialization Award (runner-up). He is among the top 50 computer science authors in the USA and the top 100 in the world according to Research.com. Three of the four companies he founded involved licensing his university research outcomes, including the first Semantic Web company in 1999, which pioneered technology similar to what is found today in Google Semantic Search and Knowledge Graph, and the fourth company, Cognovi Labs, at the intersection of emotion and AI. He is incredibly proud of his students' exceptional success in academia, industry research labs, and as entrepreneurs.
References
Sheth, Amit, Manas Gaur, Kaushik Roy, and Keyur Faldu. "Knowledge-intensive language understanding for explainable AI." IEEE Internet Computing 25, no. 5 (2021): 19-24.
Sheth, Amit, Manas Gaur, Ugur Kursuncu, and Ruwan Wickramarachchi. "Shades of knowledge-infused learning for enhancing deep learning." IEEE Internet Computing 23, no. 6 (2019): 54-63.
Gaur, Manas, Keyur Faldu, and Amit Sheth. "Semantics of the black-box: Can knowledge graphs help make deep learning systems more interpretable and explainable?." IEEE Internet Computing 25, no. 1 (2021): 51-59.
Roy, Kaushik, Qi Zhang, Manas Gaur, and Amit Sheth. "Knowledge infused policy gradients with upper confidence bound for relational bandits." In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 35-50. Springer, Cham, 2021.
Gaur, Manas, Kalpa Gunaratna, Vijay Srinivasan, and Hongxia Jin. "ISEEQ: Information Seeking Question Generation using Dynamic Meta-Information Retrieval and Knowledge Graphs." arXiv preprint arXiv:2112.07622 (2021).
Goebel, Randy, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg, and Andreas Holzinger. "Explainable AI: the new 42?." In International cross-domain conference for machine learning and knowledge extraction, pp. 295-303. Springer, Cham, 2018.
Kursuncu, Ugur, Manas Gaur, and Amit Sheth. "Knowledge infused learning (k-il): Towards deep incorporation of knowledge in deep learning." arXiv preprint arXiv:1912.00512 (2019).
Gyrard, Amelie, Manas Gaur, Saeedeh Shekarpour, Krishnaprasad Thirunarayan, and Amit Sheth. "Personalized Health Knowledge Graph." In CEUR workshop proceedings, vol. 2317, p. 5. 2018.
Li, Xiao-Hui, Caleb Chen Cao, Yuhan Shi, Wei Bai, Han Gao, Luyu Qiu, Cong Wang et al. "A survey of data-driven and knowledge-aware explainable AI." IEEE Transactions on Knowledge and Data Engineering 34, no. 1 (2020): 29-49.