Explainability and Trade Secrecy in AI-Enabled Clinical Decision Software
Duke Law School

The goal of this study is to craft empirically grounded, practical policy proposals that address the tension between the ethical need for explainability and legitimate commercial interests in trade secrecy raised by AI-based clinical decision software, including diagnostic, treatment, and predictive software. The study will examine both AI-enabled software that is regulated as a medical device by the FDA and software that is unregulated.
Arti K. Rai, Isha Sharma, and Christina Silcox, Accountability, Secrecy, and Innovation in AI-Enabled Clinical Decision Software, Journal of Law and the Biosciences, Nov. 2020
Christina Silcox et al., Trust, But Verify: Informational Challenges Surrounding AI-Enabled Clinical Decision Software, Duke University Margolis Center for Health Policy White Paper, Sept. 2020