With the rise of chatbots like ChatGPT, artificial intelligence (AI) may feel like a recent development, but Greenwall grantees have been examining the ethical implications of health AI for nearly a decade. You can read what members of the Faculty Scholars Program have recently written about AI in our previous blog on the topic. Here, we share three Making a Difference (MAD) projects that examine how health systems, clinicians, researchers, and developers can responsibly navigate the growing use of AI in health settings and the complex ethical questions it raises, for example, about transparency, accountability, and trust.
Ethical Considerations for Healthcare AI Development
For a 2018 MAD grant—which we discussed further in a 2022 blog—Arti Rai, JD, and colleagues sought to develop practical policy proposals addressing tensions between two competing concerns raised by AI-based clinical decision software: the ethical need for explainability and the legitimate commercial need for trade secrecy.
In their paper, Accountability, Secrecy, and Innovation in AI-enabled Clinical Decision Software, published in the Journal of Law and the Biosciences in 2020, Prof. Rai and coauthors outline the ways regulatory and legal frameworks could help “improve information flow without sacrificing innovation incentives.” They argue that low- to moderate-risk software would likely require less detailed information disclosure than high-risk software to achieve the requisite level of accountability. For high-risk software regulated by the U.S. Food and Drug Administration (FDA), the authors suggest that the full details of model development could be shared with the FDA, and the FDA could maintain the information as a trade secret to help protect it from reproduction by competitors. Additionally, the authors write that patent and state tort law could promote disclosure, information flow, and innovation.
With support from a 2018 MAD grant, Mildred Cho, PhD, and team aimed to identify potential barriers and enablers to the development of safe and ethical machine learning (ML) in health care.
For a 2024 AJOB Empirical Bioethics article, Moral Engagement and Disengagement in Health Care AI Development, Prof. Cho and coauthors analyzed interviews with developers of ML predictive analytics applications for health care and considered the developers’ perceptions regarding their responsibility to mitigate potential harms from ML. They found that developers varied widely in their perspectives on personal responsibility, with examples of both moral engagement and disengagement in a variety of forms. These findings suggest that regulatory approaches that rely on developers to recognize, accept, and act on responsibility for mitigating harms might be limited without stronger regulatory guidance and education. The authors suggest that developers’ thinking could be shifted toward moral engagement by integrating ethics education into computer science curricula with “real-life examples of ethical scenarios that AI developers face in daily practice[,]” incorporating “tools to enhance multiple perspective taking to align organizational values with those of individual developers, users such as clinicians, and patients[,]” and “rewarding leaders for modeling moral engagement.”
Integration of AI into Clinical Settings
For a 2021 MAD grant, Matthew DeCamp, MD, PhD, and colleagues worked to understand how patients interact with chatbots and the implications for transparency, autonomy, privacy, trust, and fairness in health care.
Dr. DeCamp and coauthors’ 2025 Journal of the American Medical Informatics Association paper, What Patients Want from Healthcare Chatbots: Insights from a Mixed-methods Study, sheds light on nuance in patient preferences for the types of tasks chatbots may help with, such as administrative tasks or discussion of sensitive health questions. They found patient-users were motivated to use chatbots for administrative tasks by the desire to save provider time, avoid unpleasant interactions, and take advantage of the chatbot’s availability. For sensitive tasks, patient-users’ motivations included avoiding judgment and embarrassment, as well as the perceived privacy and anonymity of the chatbot. In contrast, “Patients did generally prefer human involvement for tasks that require diagnostic expertise or genuine human empathy.”
Check out the list below for more publications from these MAD AI projects:
- Matthew DeCamp, et al., The Halo Effect: Perceptions of Information Privacy Among Healthcare Chatbot Users, Journal of the American Geriatrics Society, February 12, 2025
- Matthew DeCamp, et al., Patient Perceptions of Chatbot Supervision in Health Care Settings, JAMA Network Open, April 30, 2024
- Mildred Cho, et al., Developer Perspectives on Potential Harms of Machine Learning Predictive Analytics in Health Care: Qualitative Analysis, Journal of Medical Internet Research, November 16, 2023
- Matthew DeCamp, et al., More Than Just a Pretty Face? Nudging and Bias in Chatbots, Annals of Internal Medicine, June 6, 2023
- Mildred Cho, et al., A Typology of Existing Machine Learning–Based Predictive Analytic Tools Focused on Reducing Costs and Improving Quality in Health Care: Systematic Search and Content Analysis, Journal of Medical Internet Research, June 22, 2021
- Arti Rai, et al., Trust, But Verify: Informational Challenges Surrounding AI-Enabled Clinical Decision Software, Duke Margolis Center for Health Policy, September 17, 2020