About Artificial Intelligence in Medical Systems
AI and machine learning (ML) have made significant progress in medical systems, achieving human-level performance in skin cancer classification, diabetic retinopathy detection, chest radiograph diagnosis, and the detection and treatment of sepsis. While these achievements are encouraging and can lead to better diagnosis and treatment, few clinical AI solutions are deployed in hospitals or actively used by physicians.
Existing clinical AI methods often have limitations in their development pipelines that can lead to inaccurate or inconsistent outcomes across different population groups. For example, many current models are trained on datasets that primarily represent specific demographic groups, which can reduce reliability when the models are applied to other populations. Additionally, the opaque nature of many AI decision-making processes (the so-called "black box" problem) makes failures difficult to audit, leaves these systems vulnerable to cyberattacks, and raises serious concerns about security and patient data privacy.
1. Explainable AI in medical systems
Artificial Intelligence in health care and medicine is expected to become increasingly critical, especially as a resource in areas such as research, diagnostics, and clinical practice. Many practitioners, clinicians, researchers, and patients will not trust AI, or have the experience to use it, unless it is explainable, verifiable, and trustworthy.
Explainable AI (XAI), also known as interpretable AI, is artificial intelligence whose decisions or predictions humans can understand. It contrasts with the "black box" concept in machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
This research thrust will focus on the following activities:
- Develop XAI and interpretable methods for high-dimensional, longitudinal, and time-series medical data.
- Develop XAI methods to explain AI model failures.
- Develop novel XAI tools applicable to various real-world health care datasets.
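To make these activities concrete, the snippet below is a minimal sketch of one widely used post-hoc explanation technique, permutation feature importance, applied to a stand-in classifier on synthetic tabular data. The feature names are hypothetical placeholders, not drawn from any real clinical dataset, and the technique is illustrative rather than the project's actual method.

```python
# Minimal post-hoc explanation sketch: permutation feature importance on
# synthetic tabular data. Feature names are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "heart_rate", "glucose", "creatinine", "wbc_count"]
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out accuracy; larger drops mean the model relies on that
# feature more, giving a simple global explanation of model behavior.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:12s} importance: {mean:.3f} +/- {std:.3f}")
```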
2. Ethical and responsible AI in medical systems
Ethical and responsible AI concerns adherence to well-defined ethical guidelines regarding fundamental values, including individual rights, non-discrimination, and non-manipulation, along with legal regulations for the ethical use of AI tools and technologies. Used ethically, AI tools can benefit society by producing cleaner products, reducing harmful environmental impacts, increasing public safety, and improving human health. Used unethically, they can lead to disinformation, deception, human abuse, bias, prejudice, discrimination, and privacy violations.
In medical systems, these ethical concerns must be taken seriously because they directly affect patients' health: algorithmic and societal bias in predictions can lead to misdiagnosis, mistreatment, and disparities that undermine the trustworthiness of AI systems.
The major goals of this specific research theme will be to:
- Develop clear guidelines on how to ethically and responsibly develop AI for medical systems.
- Develop clear guidelines on how to ethically and responsibly deploy AI for medical applications.
- Detect the ways AI can go wrong in medical systems and bring them to the attention of AI developers and users.
- Develop approaches to detect and mitigate human- and data-induced biases in medical datasets and models.
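As a simple illustration of the bias-detection goal, the sketch below audits a classifier's outputs across a synthetic binary subgroup using two standard fairness measures, the demographic parity difference and the equal opportunity difference. All data, group labels, and prediction rates are fabricated for demonstration; a real audit would use validated fairness toolkits and clinically meaningful subgroups.

```python
# Minimal bias-detection sketch: compare a model's positive-prediction
# rate and true positive rate across a synthetic, hypothetical subgroup.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)    # hypothetical binary subgroup label
y_true = rng.integers(0, 2, n)   # synthetic ground-truth outcomes
# Simulate a model that over-predicts positives for group 1.
y_pred = np.where(group == 1,
                  rng.random(n) < 0.6,
                  rng.random(n) < 0.4).astype(int)

def true_positive_rate(true, pred):
    """Fraction of actual positives the model flags as positive."""
    return pred[true == 1].mean()

for g in (0, 1):
    m = group == g
    print(f"group {g}: positive rate = {y_pred[m].mean():.3f}, "
          f"TPR = {true_positive_rate(y_true[m], y_pred[m]):.3f}")

# Demographic parity difference: gap in positive-prediction rates.
dpd = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
# Equal opportunity difference: gap in true positive rates.
eod = abs(true_positive_rate(y_true[group == 0], y_pred[group == 0]) -
          true_positive_rate(y_true[group == 1], y_pred[group == 1]))
print(f"demographic parity difference: {dpd:.3f}")
print(f"equal opportunity difference:  {eod:.3f}")
```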
3. Security and privacy-preserving AI in medical systems
Security in health care involves protecting electronic health records (EHRs), health tracking devices, medical equipment, and software used for health care delivery and management from unauthorized access, use, and disclosure. Security has three goals: protecting the confidentiality, integrity, and availability (CIA) of critical patient data, which, if compromised, could put patient lives at risk.
This research thrust will focus on these activities:
- Develop security and privacy-preserving methods with use-cases in health care.
- Develop AI/ML methods for identifying and predicting the security threats in health care.
- Apply privacy-preserving federated learning in health care.
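The sketch below illustrates the federated learning idea behind the last activity: several simulated "hospitals" each train a logistic regression locally, and only the model weights, never the raw patient records, are shared and averaged (the classic FedAvg scheme). The sites, data, and model are synthetic placeholders, not the project's actual pipeline.

```python
# Minimal federated averaging (FedAvg) sketch: each simulated hospital
# trains locally; only weights leave the site. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
W_TRUE = rng.normal(size=5)  # shared synthetic "ground truth" weights

def make_hospital_data(n=200):
    """Synthetic local dataset standing in for one hospital's records."""
    X = rng.normal(size=(n, 5))
    y = (X @ W_TRUE + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)  # average gradient step
    return w

hospitals = [make_hospital_data() for _ in range(4)]
w_global = np.zeros(5)

for _ in range(20):
    # Each site refines the current global model on its own data; the
    # server then averages the returned weights (equal weighting here).
    local_weights = [local_update(w_global.copy(), X, y)
                     for X, y in hospitals]
    w_global = np.mean(local_weights, axis=0)

# Evaluate the shared model on pooled data (demonstration only; in a
# real deployment no site would ever pool raw records like this).
X_all = np.vstack([X for X, _ in hospitals])
y_all = np.concatenate([y for _, y in hospitals])
acc = ((1.0 / (1.0 + np.exp(-(X_all @ w_global))) > 0.5) == y_all).mean()
print(f"global model accuracy: {acc:.3f}")
```

Averaging weights rather than pooling records is what keeps patient data on site; in practice this is often combined with secure aggregation or differential privacy for stronger guarantees.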
4. Use-inspired research in trustworthy AI in medical systems
Use-inspired research in medical systems combines the techniques developed in the above research themes and applies them to real-world use cases to build trustworthy AI systems. The major goals of this specific research theme are to:
- Develop a test bed to demonstrate a trustworthy AI system by leveraging existing techniques and combining them with methods developed in the project.
- Apply the test bed to a real-world medical application to demonstrate its utility.
- Share the test bed's results with a diverse set of users to demonstrate trustworthy AI in medical systems.
These goals focus on the following specific use cases to achieve the broader science goals of the AI Institute:
Use Case #1: Dermatology: Develop explainable, secure, and generalizable AI tools for real-world diagnosis and prediction of skin diseases.
Dermatologists diagnose a wide variety of skin diseases, including skin cancers, inflammatory conditions such as atopic dermatitis and psoriasis, and contagious diseases such as measles. Globally, an estimated 3 billion people have inadequate access to medical care for skin diseases. Advances in AI tools make it possible to detect skin diseases and identify individuals at risk; however, most existing tools are biased, are trained on datasets lacking diversity, are not explainable, and rely on noisy diagnostic labels.
The large imaging datasets used to build these AI tools also raise security and privacy issues. Together, these problems create ethical challenges and untrustworthy AI solutions that physicians cannot use for accurate diagnosis and prognosis of skin diseases. There is therefore a great need to develop ethical and trustworthy AI solutions in the dermatology domain.
Use Case #2: Intensive care: Develop explainable, secure, and generalizable AI tools for real-world risk prediction in ICU patients.
Intensive care units provide care to patients who are critically or seriously ill or injured. More than 5 million patients are admitted annually to U.S. ICUs for intensive or invasive monitoring; support of airway, breathing, or circulation; stabilization of acute or life-threatening medical problems; comprehensive management of injury and/or illness; and maximization of comfort for dying patients.
Recent AI advances have produced models, built on publicly available datasets, for clinical prediction tasks in intensive care such as in-hospital mortality, physiological decompensation, length of stay, and phenotype classification. However, bias and fairness challenges in risk prediction models still hamper the successful adoption of medical AI tools. Thus, there is a great need to develop ethical and trustworthy AI solutions for intensive care that yield fair models for risk prediction tasks.
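Connecting this use case back to the fairness theme, the sketch below trains an in-hospital mortality classifier on entirely synthetic data and reports discrimination (AUROC) both overall and per subgroup, since a model that looks strong on average can still underperform for specific patient groups. The features, outcomes, and subgroup variable are all hypothetical placeholders.

```python
# Minimal ICU risk-prediction evaluation sketch: overall vs. per-subgroup
# AUROC for a synthetic mortality classifier. All data are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 6))        # stand-ins for vitals and lab values
group = rng.integers(0, 2, n)      # hypothetical patient subgroup
# Outcome depends on the subgroup via an interaction term, so a single
# linear model will fit one group better than the other.
logit = X @ rng.normal(size=6) + 0.8 * group * X[:, 0]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

print(f"overall AUROC: {roc_auc_score(y_te, scores):.3f}")
for g in (0, 1):
    m = g_te == g
    print(f"group {g} AUROC: {roc_auc_score(y_te[m], scores[m]):.3f}")
```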