Artificial intelligence (AI) is moving rapidly from promise to practice in health care—from diagnostics to ambient scribing. While these tools offer potential benefits, they also pose risks and raise ethical questions about governance, transparency, and equity.
Members of the Greenwall Faculty Scholars community have helped shape the conversation around the ethics of AI in health care, examining how AI should be designed, implemented, regulated, and discussed with patients. Below, we highlight a sampling of recent work by Faculty Scholars and Alums addressing issues where bioethics can help guide AI’s use in health care.
Guidance for Responsible Use
“[T]hose involved in [AI’s] development and implementation can operate in silos, having limited insight into one another’s practices,” Michelle Mello, JD, PhD, and colleagues note at the outset of their paper, Building Consensus for Responsible AI in Healthcare, published in The American Journal of Bioethics (AJOB). The result is “a fragmented landscape of guidance and expectations.” The authors examine the goals, challenges, and strategies for building consensus around AI guidelines in health care, drawing on early lessons from the Coalition for Health AI (CHAI), a clinician-led group that includes stakeholders from industry, health systems, philanthropy, and other sectors. The authors conclude with a key insight: “When forged through diverse perspectives and tested in real contexts, guidelines gain both legitimacy and adaptability, allowing them to respond effectively to real-world challenges.”
Responding to CHAI’s guidance on the responsible use of AI in health care—created in collaboration with the Joint Commission (TJC), the primary U.S. hospital accreditor—I. Glenn Cohen, JD, and coauthors published a JAMA viewpoint, New Guidance on Responsible Use of AI. They praise CHAI and TJC’s guidance as “an important framework aiming for enhanced patient safety, improved patient outcomes, stronger data protection, increased trust, and greater operational efficiency,” but call attention to the barriers to operationalizing its recommendations for monitoring AI performance and outcomes, noting the guidance “may be aspirational.” “[M]any [U.S. hospitals] operate in rural settings with resource constraints (eg, lack of expanded Medicaid coverage),” they write. “Conducting an independent bias audit and risk analysis might be beyond their expertise and resources.”
Law and Regulation
In their JAMA Health Forum publication, Role of the States in the Future of AI Regulation, Prof. Mello, Jessica L. Roberts, JD, and coauthor Peter B. Childs examine the growing role of U.S. states in regulating AI in health care. State lawmakers have introduced dozens of bills addressing, for example, insurers’ use of AI, patient disclosure, and algorithmic discrimination. The authors note that while a patchwork of state laws is imperfect, it can help surface effective approaches and fill regulatory gaps in the absence of federal protections. Beyond steps already taken, the authors recommend that states require health care organizations and insurers to adopt AI governance processes to better promote AI safety.
In NEJM AI, Jennifer Blumenthal-Barby, PhD, and coauthor Nicholas Peoples, MD, write about the Dual Public Health and Regulatory Dilemmas of “Relational” Artificial Intelligence. Relational AI is designed to simulate emotional support, companionship, or intimacy. Recognizing both its potential therapeutic benefits and its risks of “emotional dependency, reinforced delusions, addictive behaviors, and encouragement of self-harm,” Prof. Blumenthal-Barby and Dr. Peoples stress the importance of prioritizing research, youth protections—which could include requiring age verification alongside “more tightly regulated AI content”—accountability mechanisms, and public education, as well as proactive patient counseling.
Prof. Cohen and coauthor Julian De Freitas, PhD, examine a specific California law regulating AI “companion chatbots”—the first law of its kind in the U.S.—assessing both its contributions toward making companion chatbots safer for users and its limits, along with how it could be improved. “[I]f the aim [of the California law] is truly to reduce suicide risk and protect mental health, this should be seen as a foundation, not a finish line,” the authors write in Mitigating Suicide Risk for Minors Involving AI Chatbots—A First in the Nation Law, published in JAMA. “The next challenge is ensuring that disclosure, safety protocols, and emotional safeguards are designed with evidence—not optics or box checking—in mind.”
Disclosure and Transparency
The spread of AI in health care raises questions about what patients should be told about AI tools. While patients generally want to know if AI influences their care, Prof. Mello and colleagues caution in their perspective published in JAMA, Ethical Obligations to Inform Patients About Use of AI Tools, that disclosing every tool is neither practical nor always in patients’ interests, as it risks overwhelming them and eroding trust. They propose a framework for patient disclosure based on two factors: the risk of physical harm posed by an AI tool and the extent to which patients can exercise agency in response to a disclosure. The team further concludes that, even when disclosure is not required at the point of care, health care organizations should maintain organizational transparency about AI use and involve patients in AI governance processes.
Kayte Spector-Bagdady, JD, and coauthor Alex John London, PhD, penned an AJOB editorial, Disclosure as Absolution in Medicine: Disentangling Autonomy from Beneficence and Justice in Artificial Intelligence, referencing a 2024 AJOB article by Prof. Blumenthal-Barby and colleagues on patient consent and the right to notice and explanation for AI systems in health care. Prof. Spector-Bagdady and Prof. London argue that the adoption of AI in medicine cannot rely solely on informed consent, noting that informed consent is not a substitute for rigorous standards to ensure the safe, effective, and equitable deployment of AI. Only once safety, efficacy, and equity are established, they contend, is it “appropriate to differentiate between choices to which patients should consent versus those that are imbedded within the clinical environment and affect the functioning of the health system for many patients.”
In the coming weeks, we will follow up with a blog spotlighting AI scholarship by our Making a Difference grantees. For further reading on ethics issues related to AI in health care from members of our Faculty Scholars Program community, check out these additional publications from the past year:
- I. Glenn Cohen, et al., AI Therapists vs Companions: Wellness, Licensure, and Liability, The American Journal of Bioethics, February 12, 2026
- I. Glenn Cohen, et al., Preemption at the Intersection of Health Care and Artificial Intelligence, JAMA, February 11, 2026
- I. Glenn Cohen, et al., Driving AI Health Innovation through the European Health Data Space: Opportunities and Challenges for Non-EU Country Participation, NEJM AI, February 6, 2026
- Doni Bloomfield, JD, et al., Biological Data Governance in an Age of AI, Science, February 5, 2026
- Barbara Evans, PhD, JD, et al., The Missing Dimension in Clinical AI: Making Hidden Values Visible, NEJM AI, January 22, 2026
- Michelle Mello, et al., Designing Clinically Useful AI: A Blueprint for Impact, NEJM AI, January 22, 2026
- Alex Smith, MD, et al., Letter: Machine Learning Can Assist Surrogate Decision-Makers, NEJM AI, January 22, 2026
- Michelle Mello, et al., The AI Arms Race In Health Insurance Utilization Review: Promises Of Efficiency And Risks Of Supercharged Flaws, Health Affairs, January 6, 2026
- Nicholas G. Evans, PhD, et al., Horizon Scan of Emerging Issues at the Intersection of National Security, Artificial Intelligence, and Human Performance Enhancement, Science and Engineering Ethics, December 3, 2025
- Barbara Evans, et al., Biomedical Data Repositories Require Governance for Artificial Intelligence/Machine Learning Applications at Every Step, JAMIA Open, December 1, 2025
- I. Glenn Cohen, Michelle Mello, Kayte Spector-Bagdady, et al., AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence, JAMA, October 13, 2025
- Jason Karlawish, MD, et al., Demographic Data Supporting FDA Authorization of AI Devices for Alzheimer Disease and Related Dementias, JAMA, July 30, 2025
- I. Glenn Cohen, et al., Medical AI and Clinician Surveillance — The Risk of Becoming Quantified Workers, The New England Journal of Medicine, June 14, 2025
- I. Glenn Cohen, et al., Unregulated Emotional Risks of AI Wellness Apps, Nature Machine Intelligence, June 6, 2025