Explainability and Trade Secrecy in AI-Enabled Clinical Decision Software
Duke Law School

The goal of this study is to craft empirically grounded, practical policy proposals that address the tensions between the ethical need for explainability and legitimate commercial interests in trade secrecy raised by AI-based clinical decision software, including diagnostic, treatment, and predictive tools. The study will examine both AI-enabled software that is regulated as a medical device by the FDA and software that is unregulated.
Christina Silcox et al., Trust, But Verify: Informational Challenges Surrounding AI-Enabled Clinical Decision Software, Duke University Margolis Center for Health Policy White Paper, Sept. 2020.