Gerhards H, Weber K, Bittner U, Fangerau H. Machine learning healthcare applications (ML-HCAs) are no stand-alone systems but part of an ecosystem – a broader ethical and health technology assessment approach is needed. Am J Bioeth 2020; 20.
US Department of Health and Human Services, US Food and Drug Administration, Center for Devices & Radiological Health. Developing a Software Precertification Program: A Working Model (v1.0 – January 2019). 2019. https://www.fda.gov/media/119722/download Accessed October 22, 2020.
Johnson KB, Wei WQ, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci 2021; 14(1): 86–93.
Hunt R, McKelvey F. Algorithmic regulation in media and cultural policy: a framework to evaluate barriers to accountability. J Inform Policy 2019; 9: 307–35.
Wall J, Krummel T. The digital surgeon: how big data, automation, and artificial intelligence will change surgical practice. J Pediatr Surg 2019.
Mudgal KS, Das N. The ethical adoption of artificial intelligence in radiology. BJR Open 2020; 2(1): 20190020.
Shi S, He D, Li L, et al. Applications of blockchain in ensuring the security and privacy of electronic health record systems: a survey. Comput Secur 2020; 97: 101966.
Payrovnaziri SN, Chen Z, Rengifo-Moreno P, et al. Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review. J Am Med Inform Assoc 2020; 27(7): 1173–85.
Zieger A. Will payers use AI to do prior authorization? And will these AIs make things better? Healthcare IT Today. December 27, 2018. https://www.healthcareittoday.com/2018/12/27/will-payers-use-ai-to-do-prior-authorization-and-will-these-ais-make-things-better/ Accessed October 22, 2020.
Andaur Navarro CL, Damen J, Takada T, et al. Protocol for a systematic review on the methodological and reporting quality of prediction model studies using machine learning techniques. BMJ Open 2020; 10(11): e038832.
Kasthurirathne SN, Vest JR, Menachemi N, et al. Assessing the capacity of social determinants of health data to augment predictive models identifying patients in need of wraparound social services. J Am Med Inform Assoc 2018; 25(1): 47–53.
Filice RW, Ratwani RM. The case for user-centered artificial intelligence in radiology. Radiol Artif Intell 2020; 2(3): e190095.
Hernandez-Boussard T, Bozkurt S, Ioannidis JPA, Shah NH. MINIMAR (MINimum Information for Medical AI Reporting): developing reporting standards for artificial intelligence in health care. J Am Med Inform Assoc 2020; 27(12): 2011–5.
Fitzpatrick K, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health 2017; 4.
Butcher J, Beridze I. What is the state of artificial intelligence governance globally? RUSI J 2019; 164.
Scherer MU. Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harvard J Law Technol 2015; 29: 353.
Crowley R, Tan Y, Ioannidis JPA. Empirical assessment of bias in machine learning diagnostic test accuracy studies. J Am Med Inform Assoc 2020.
Jiang X, Wu Y, Marsolo K, Ohno-Machado L. Development of a web service for analysis in a distributed network. EGEMS (Wash DC) 2014; 2(1): 1053.
Zitnik M, Agrawal M, Leskovec J. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics 2018; 34(13): i457–66.
Scott M. "In 2020, global 'techlash' will move from words to action." POLITICO. 2019. https://www.politico.eu/article/tech-policy-competition-privacy-facebook-europe-techlash/ Accessed October 22, 2020.
Beede E, Baylor E, Hersch F, et al. A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Honolulu, HI: Association for Computing Machinery; 2020: 1–12.
Tiku N. Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it. Washington Post. 2020. https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/ Accessed January 8, 2021.
Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Lancet Digit Health 2020; 2(10): e537–48.
Holzinger A, Plass M, Kickmeier-Rust M, Holzinger K, Crişan G, Pintea C, Palade V. Interactive machine learning: experimental evidence for the human in the algorithmic loop. Appl Intell 2018; 49.
Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc 2019.
Pedersen M, Verspoor K, et al. Artificial intelligence for clinical decision support in neurology. Brain Commun 2020; 2(2): fcaa096.
Miake-Lye IM, Delevan DM, Ganz DA, et al. Unpacking organizational readiness for change: an updated systematic review and content analysis of assessments. BMC Health Serv Res 2020; 20(1): 106.
IBM's Watson supercomputer recommended 'unsafe and incorrect' cancer treatments, internal documents show. 2018.
The Fourth Industrial Revolution: what it means and how to respond. World Economic Forum. 2016. https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/ Accessed March 16, 2021.
Mongan J, Moy L, Kahn C. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): a guide for authors and reviewers. Radiol Artif Intell 2020; 2(2).
Cai CJ, Reif E, Hegde N, et al. Human-centered tools for coping with imperfect algorithms during medical decision-making. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems; 2019.
Roski J, Gillingham BL, Just E, Barr S, Sohn E, Sakarcan K. Implementing and scaling artificial intelligence solutions: considerations for policy makers and decision makers. Health Aff Blog. September 18, 2018. doi: 10.1377/hblog20180917.283077. https://www.healthaffairs.org/do/10.1377/hblog20180917.283077/full/ Accessed April 8, 2021.
Rastogi N, Gloria M, Hendler J. Security and privacy of performing data analytics in the cloud: a three-way handshake of technology, policy, and management. J Inform Policy 2015; 5: 129–54.
Cai CJ, Winter S, Steiner D, Wilcox L, Terry M. "Hello AI": uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making. Proc ACM Hum-Comput Interact 2019; 3(CSCW): Article 104, 1–24.
Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell 2019; 1(9): 389–99.
DeCamp M, Lindvall C. Latent bias and the implementation of artificial intelligence in medicine. J Am Med Inform Assoc 2020.
Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020; 22(6): e15154.
Phillips P, Hahn C, Fontana P, Broniatowski D, Przybocki M. Four Principles of Explainable Artificial Intelligence. 2020.
Executive Office of the President of the United States; Artificial Intelligence Research & Development Interagency Working Group. 2016–2019 Progress Report: Advancing Artificial Intelligence R&D. 2019. https://www.nitrd.gov/pubs/AI-Research-and-Development-Progress-Report-2016-2019.pdf Accessed October 22, 2020.
Racial bias in a medical algorithm favors white patients over sicker black patients. The Washington Post
Jonathan Richens, Ciarán Lee, Saurabh Johri (2020)
Improving the accuracy of medical diagnosis with causal machine learningNature Communications, 11
Carrie Cai, Samantha Winter, David Steiner, Lauren Wilcox, Michael Terry (2019)
"Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-MakingProceedings of the ACM on Human-Computer Interaction, 3
Z. Obermeyer, Brian Powers, C. Vogeli, S. Mullainathan (2019)
Dissecting racial bias in an algorithm used to manage the health of populationsScience, 366
Will Payers Use AI to Do Prior Authorization ? And Will These AIs Make Things Better ? Healthcare IT Today . December 27 th , 2018
Sharon Davis, R. Greevy, T. Lasko, Colin Walsh, M. Matheny (2020)
Comparison of Prediction Model Performance Updating Protocols: Using a Data-Driven Testing Procedure to Guide UpdatingAMIA ... Annual Symposium proceedings. AMIA Symposium, 2019
( Jiang X , WuY, MarsoloK, et alDevelopment of a web service for analysis in a distributed network. EGEMS (Wash DC)2014; 2 (1): 1053.25848586)
Jiang X , WuY, MarsoloK, et alDevelopment of a web service for analysis in a distributed network. EGEMS (Wash DC)2014; 2 (1): 1053.25848586Jiang X , WuY, MarsoloK, et alDevelopment of a web service for analysis in a distributed network. EGEMS (Wash DC)2014; 2 (1): 1053.25848586, Jiang X , WuY, MarsoloK, et alDevelopment of a web service for analysis in a distributed network. EGEMS (Wash DC)2014; 2 (1): 1053.25848586
S. Shi, D. He, Li Li, Neeraj Kumar, M. Khan, Kim-Kwang Choo (2020)
Applications of blockchain in ensuring the security and privacy of electronic health record systems: A surveyComputers & Security, 97
( Gianfrancesco MA , TamangS, YazdanyJ, et alPotential biases in machine learning algorithms using electronic health record data. JAMA Intern Med2018; 178 (11): 1544–7.30128552)
Gianfrancesco MA , TamangS, YazdanyJ, et alPotential biases in machine learning algorithms using electronic health record data. JAMA Intern Med2018; 178 (11): 1544–7.30128552Gianfrancesco MA , TamangS, YazdanyJ, et alPotential biases in machine learning algorithms using electronic health record data. JAMA Intern Med2018; 178 (11): 1544–7.30128552, Gianfrancesco MA , TamangS, YazdanyJ, et alPotential biases in machine learning algorithms using electronic health record data. JAMA Intern Med2018; 178 (11): 1544–7.30128552
( Contreras I , VehiJ. Artificial intelligence for diabetes management and decision support: literature review. J Med Internet Res2018; 20 (5): e10775.29848472)
Contreras I , VehiJ. Artificial intelligence for diabetes management and decision support: literature review. J Med Internet Res2018; 20 (5): e10775.29848472Contreras I , VehiJ. Artificial intelligence for diabetes management and decision support: literature review. J Med Internet Res2018; 20 (5): e10775.29848472, Contreras I , VehiJ. Artificial intelligence for diabetes management and decision support: literature review. J Med Internet Res2018; 20 (5): e10775.29848472
( Strohm L , HehakayaC, RanschaertER, BoonWPC, MoorsEHM. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol2020; 30 (10): 5525–32.32458173)
Strohm L , HehakayaC, RanschaertER, BoonWPC, MoorsEHM. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol2020; 30 (10): 5525–32.32458173Strohm L , HehakayaC, RanschaertER, BoonWPC, MoorsEHM. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol2020; 30 (10): 5525–32.32458173, Strohm L , HehakayaC, RanschaertER, BoonWPC, MoorsEHM. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol2020; 30 (10): 5525–32.32458173
( Williams I. Organizational readiness for innovation in health care: some lessons from the recent literature. Health Serv Manage Res2011; 24 (4): 213–8.22040949)
Williams I. Organizational readiness for innovation in health care: some lessons from the recent literature. Health Serv Manage Res2011; 24 (4): 213–8.22040949Williams I. Organizational readiness for innovation in health care: some lessons from the recent literature. Health Serv Manage Res2011; 24 (4): 213–8.22040949, Williams I. Organizational readiness for innovation in health care: some lessons from the recent literature. Health Serv Manage Res2011; 24 (4): 213–8.22040949
( Bal BS. An introduction to medical malpractice in the United States. Clin Orthop Relat Res2009; 467 (2): 339–47.19034593)
Bal BS. An introduction to medical malpractice in the United States. Clin Orthop Relat Res2009; 467 (2): 339–47.19034593Bal BS. An introduction to medical malpractice in the United States. Clin Orthop Relat Res2009; 467 (2): 339–47.19034593, Bal BS. An introduction to medical malpractice in the United States. Clin Orthop Relat Res2009; 467 (2): 339–47.19034593
H. Liyanage, S. Liaw, J. Jonnagaddala, R. Schreiber, C. Kuziemsky, A. Terry, S. Lusignan (2019)
Artificial Intelligence in Primary Health Care: Perceptions, Issues, and ChallengesYearbook of Medical Informatics, 28
( Asan O , BayrakAE, ChoudhuryA. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res2020; 22 (6): e15154.32558657)
Asan O , BayrakAE, ChoudhuryA. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res2020; 22 (6): e15154.32558657Asan O , BayrakAE, ChoudhuryA. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res2020; 22 (6): e15154.32558657, Asan O , BayrakAE, ChoudhuryA. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res2020; 22 (6): e15154.32558657
(CMS to Strengthen Oversight of Medicare’s Accreditation Organizations. CMS Newsroom. 2018. https://www.cms.gov/newsroom/press-releases/cms-strengthen-oversight-medicares-accreditation-organizations Accessed October 14, 2020)
CMS to Strengthen Oversight of Medicare’s Accreditation Organizations. CMS Newsroom. 2018. https://www.cms.gov/newsroom/press-releases/cms-strengthen-oversight-medicares-accreditation-organizations Accessed October 14, 2020CMS to Strengthen Oversight of Medicare’s Accreditation Organizations. CMS Newsroom. 2018. https://www.cms.gov/newsroom/press-releases/cms-strengthen-oversight-medicares-accreditation-organizations Accessed October 14, 2020, CMS to Strengthen Oversight of Medicare’s Accreditation Organizations. CMS Newsroom. 2018. https://www.cms.gov/newsroom/press-releases/cms-strengthen-oversight-medicares-accreditation-organizations Accessed October 14, 2020
(2021)
The Use of Artificial Intelligence in Health Care: Trustworthiness (ANSI/ CTA-2090)
( Lee D , et alA human-in-the-loop perspective on AutoML: milestones and the road ahead. IEEE Data Eng. Bull2019; 42: 59–70.)
Lee D , et alA human-in-the-loop perspective on AutoML: milestones and the road ahead. IEEE Data Eng. Bull2019; 42: 59–70.Lee D , et alA human-in-the-loop perspective on AutoML: milestones and the road ahead. IEEE Data Eng. Bull2019; 42: 59–70., Lee D , et alA human-in-the-loop perspective on AutoML: milestones and the road ahead. IEEE Data Eng. Bull2019; 42: 59–70.
Hernandez-Boussard T, Bozkurt S, Ioannidis JPA, Shah NH. MINIMAR (MINimum Information for Medical AI Reporting): developing reporting standards for artificial intelligence in health care. J Am Med Inform Assoc 2020; 27 (12): 2011–5. PMID: 32594179.
Strohm L, Hehakaya C, Ranschaert E, Boon W, Moors E. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol 2020; 30.
Filice R, Ratwani R. The case for user-centered artificial intelligence in radiology. Radiol Artif Intell 2020; 2 (3).
Guidance for Regulation of Artificial Intelligence Applications. Executive Office of the President, Office of Management and Budget (OMB); 2020.
Gianfrancesco M, Tamang S, Yazdany J, Schmajuk G. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern Med 2018; 178.
Contreras I, Vehí J. Artificial intelligence for diabetes management and decision support: literature review. J Med Internet Res 2018; 20.
Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health 2017; 4 (2): e19. PMID: 28588005.
KI Strategie. https://www.ki-strategie-deutschland.de/home.html Accessed September 23, 2020.
Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms
Butcher J, Beridze I. What is the state of artificial intelligence governance globally? RUSI J 2019; 164 (5–6): 88–96.
Liyanage H, Liaw ST, Jonnagaddala J, et al. Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb Med Inform 2019; 28 (1): 41–6. PMID: 31022751.
Global ‘Techlash’ Will Move from Words to Action. 2020.
Johnson KB, Wei WQ, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci 2021; 14 (1): 86–93. PMID: 32961010.
Implementing and scaling artificial intelligence solutions: considerations for policy makers and decision makers. Health Aff Blog
Phillips PJ, Hahn AC, Fontana PC, et al. Four Principles of Explainable Artificial Intelligence (Draft). 2020. doi: 10.6028/NIST.IR.8312-draft. Accessed October 22, 2020.
Select Committee on Artificial Intelligence. The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update. 2019. https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf Accessed October 22, 2020.
Office of the President of the United States. Maintaining American leadership in artificial intelligence. Executive Order 13859
Ntoutsi E, Fafalios P, Gadiraju U, et al. Bias in data-driven artificial intelligence systems: an introductory survey. WIREs Data Mining Knowl Discov 2020; 10 (3): e1356.
Petitgand C, Motulsky A, Denis JL, Régis C. Investigating the barriers to physician adoption of an artificial intelligence-based decision support system in emergency care: an interpretative qualitative study. Stud Health Technol Inform 2020; 270: 1001–5. PMID: 32570532.
Barda A, Horvat C, Hochheiser H. A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Med Inform Decis Mak 2020; 20.
Gerhards H, Weber K, Bittner U, Fangerau H. Machine learning healthcare applications (ML-HCAs) are no stand-alone systems but part of an ecosystem: a broader ethical and health technology assessment approach is needed. Am J Bioeth 2020; 20 (11): 46–8.
Model Artificial Intelligence Governance Framework and Assessment Guide. World Economic Forum. https://www.weforum.org/projects/model-ai-governance-framework/ Accessed March 16, 2021.
Jobin A, Ienca M, Vayena E. Artificial intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668. 2019.
Ribeiro MT, Singh S, Guestrin C. “Why should I trust you?”: explaining the predictions of any classifier. Association for Computational Linguistics; 2016.
Ting DSW, Peng L, Varadarajan AV, et al. Deep learning in ophthalmology: the technical and clinical considerations. Prog Retin Eye Res 2019; 72: 100759. PMID: 31048019.
Wall J, Krummel T. The digital surgeon: how big data, automation, and artificial intelligence will change surgical practice. J Pediatr Surg 2020; 55S: 47–50. PMID: 31767194.
Miake-Lye I, Delevan D, Ganz D, Mittman B, Finley E. Unpacking organizational readiness for change: an updated systematic review and content analysis of assessments. BMC Health Serv Res 2020; 20.
Huber P. Safety and the second best: the hazards of public risk management in the courts. Columbia Law Rev 1985; 85 (2): 277–337.
Diakopoulos N. Algorithmic Accountability Reporting: On the Investigation of Black Boxes. 2014.
Norgeot B, Quer G, Beaulieu-Jones B, et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat Med 2020; 26.
Zednik C. Solving the black box problem: a normative framework for explainable artificial intelligence. Philos Technol 2019. doi: 10.1007/s13347-019-00382-7.
Graham S, Depp C, Lee EE, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep 2019; 21.
Sohn E, Roski J, Escaravage S, Maloy K. Four lessons in the adoption of machine learning in health care. Health Aff Blog. May 9, 2017. doi: 10.1377/hblog20170509.059985. https://www.healthaffairs.org/do/10.1377/hblog20170509.059985/full/ Accessed April 8, 2021.
van de Poel I. Embedding values in artificial intelligence (AI) systems. Minds and Machines. 2020. https://link.springer.com/article/10.1007/s11023-020-09537-4 Accessed October 5, 2020.
DeCamp M, Lindvall C. Latent bias and the implementation of artificial intelligence in medicine. J Am Med Inform Assoc 2020; 27 (12): 2020–3.
Putting Responsible AI Into Practice
Knight Foundation. Techlash? America’s Growing Concern with Major Technology Companies. 2020. https://knightfoundation.org/reports/techlash-americas-growing-concern-with-major-technology-companies/ Accessed March 16, 2021.
Navarro C, Damen J, Takada T, et al. Protocol for a systematic review on the methodological and reporting quality of prediction model studies using machine learning techniques. BMJ Open 2020; 10.
Clark J, Hadfield GK. Regulatory markets for AI safety. arXiv preprint arXiv:2001.00078. 2019.
US Department of Health and Human Services, US Food and Drug Administration, Center for Devices & Radiological Health. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback. 2019. https://www.fda.gov/media/122535/download Accessed October 22, 2020.
Beede E, Baylor E, Hersch F, et al. A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; 2020.
The Fourth Industrial Revolution: what it means and how to respond. World Economic Forum
Subbaswamy A, Saria S. From development to deployment: dataset shift, causality, and shift-stable models in health AI. Biostatistics 2020; 21 (2): 345–52. PMID: 31742354.
“Exclusive: Google Cancels AI Ethics Board in Response to Outcry”. Vox. 2019. https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board Accessed September 23, 2020.
D’Onfro J. Google Scraps Its AI Ethics Board Less Than Two Weeks After Launch in the Wake of Employee Protest. Forbes. https://www.forbes.com/sites/jilliandonfro/2019/04/04/google-cancels-its-ai-ethics-board-less-than-two-weeks-after-launch-in-the-wake-of-employee-protest/ Accessed January 8, 2021.
When Employers Choose Health Plans: Do NCQA Accreditation and HEDIS Data Count? Commonwealth Fund. 1998. https://www.commonwealthfund.org/publications/fund-reports/1998/aug/when-employers-choose-health-plans-do-ncqa-accreditation-and Accessed January 8, 2021.
Rivera SC, Liu X, Chan AW, Denniston AK, Calvert MJ; SPIRIT-AI and CONSORT-AI Working Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. BMJ 2020; 370: m3210. PMID: 32907797.
Holzinger A, Plass M, Kickmeier-Rust M, et al. Interactive machine learning: experimental evidence for the human in the algorithmic loop. Appl Intell 2019; 49 (7): 2401–14.
Mayer F, Gereffi G. Regulation and economic globalization: prospects and limits of private governance. Bus Polit 2010; 12.
Williams I. Organizational readiness for innovation in health care: some lessons from the recent literature. Health Serv Manage Res 2011; 24.
White Paper on Artificial Intelligence: A European Approach to Excellence and Trust. European Commission. 2020. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en Accessed September 23, 2020.
Rojas-Gualdrón D. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. National Academy of Medicine: a review [Una reseña]. CES Medicina 2022.
Maurer SM. The new self-governance: a theoretical framework. Bus Polit 2017; 19 (1): 41–67.
Eaneff S, Obermeyer Z, Butte A. The case for algorithmic stewardship for artificial intelligence and machine learning technologies. JAMA 2020.
Crowley RJ, Tan YJ, Ioannidis JPA. Empirical assessment of bias in machine learning diagnostic test accuracy studies. J Am Med Inform Assoc 2020; 27 (7): 1092–101. PMID: 32548642.
The Use of Artificial Intelligence in Health Care: Trustworthiness (ANSI/CTA-2090). Consumer Technology Association. https://shop.cta.tech/products/the-use-of-artificial-intelligence-in-healthcare-trustworthiness-cta-2090 Accessed March 16, 2021.
Zitnik M, Agrawal M, Leskovec J. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics 2018; 34.
Kelly CJ, Karthikesalingam A, Suleyman M, et al. Key challenges for delivering clinical impact with artificial intelligence. BMC Med 2019; 17 (1): 195. PMID: 31665002.
Payrovnaziri S, Chen Z, Rengifo-Moreno P, et al. Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review. J Am Med Inform Assoc 2020.
Alami H, Lehoux P, Denis J-L, et al. Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. J Health Organ Manag 2020; 35 (1): 106–14.
( Tzachor A , WhittlestoneJ, SundaramL, hÉigeartaighSÓ. Artificial intelligence in a crisis needs ethics with urgency. Nat Mach Intell2020; 2 (7): 365–6.)
Tzachor A , WhittlestoneJ, SundaramL, hÉigeartaighSÓ. Artificial intelligence in a crisis needs ethics with urgency. Nat Mach Intell2020; 2 (7): 365–6.Tzachor A , WhittlestoneJ, SundaramL, hÉigeartaighSÓ. Artificial intelligence in a crisis needs ethics with urgency. Nat Mach Intell2020; 2 (7): 365–6., Tzachor A , WhittlestoneJ, SundaramL, hÉigeartaighSÓ. Artificial intelligence in a crisis needs ethics with urgency. Nat Mach Intell2020; 2 (7): 365–6.
( Richens JG , LeeCM, JohriS. Improving the accuracy of medical diagnosis with causal machine learning. Nat Commun2020; 11 (1): 3923.32782264)
Richens JG , LeeCM, JohriS. Improving the accuracy of medical diagnosis with causal machine learning. Nat Commun2020; 11 (1): 3923.32782264Richens JG , LeeCM, JohriS. Improving the accuracy of medical diagnosis with causal machine learning. Nat Commun2020; 11 (1): 3923.32782264, Richens JG , LeeCM, JohriS. Improving the accuracy of medical diagnosis with causal machine learning. Nat Commun2020; 11 (1): 3923.32782264
( Reddy S , AllanS, CoghlanS, CooperP. A governance model for the application of AI in health care. J Am Med Inform Assoc2020; 27 (3): 491–7.31682262)
Reddy S , AllanS, CoghlanS, CooperP. A governance model for the application of AI in health care. J Am Med Inform Assoc2020; 27 (3): 491–7.31682262Reddy S , AllanS, CoghlanS, CooperP. A governance model for the application of AI in health care. J Am Med Inform Assoc2020; 27 (3): 491–7.31682262, Reddy S , AllanS, CoghlanS, CooperP. A governance model for the application of AI in health care. J Am Med Inform Assoc2020; 27 (3): 491–7.31682262
I. Poel (2020)
Embedding Values in Artificial Intelligence (AI) SystemsMinds Mach., 30
D. Ting, L. Peng, A. Varadarajan, P. Keane, P. Burlina, M. Chiang, L. Schmetterer, L. Pasquale, N. Bressler, D. Webster, M. Abràmoff, T. Wong (2019)
Deep learning in ophthalmology: The technical and clinical considerationsProgress in Retinal and Eye Research, 72
( Mayer F , GereffiG. Regulation and economic globalization: prospects and limits of private governance. Bus Polit2010; 12 (3): 1–25.)
Mayer F , GereffiG. Regulation and economic globalization: prospects and limits of private governance. Bus Polit2010; 12 (3): 1–25.Mayer F , GereffiG. Regulation and economic globalization: prospects and limits of private governance. Bus Polit2010; 12 (3): 1–25., Mayer F , GereffiG. Regulation and economic globalization: prospects and limits of private governance. Bus Polit2010; 12 (3): 1–25.
( Barda AJ , HorvatCM, HochheiserH. A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Med Inform Decis Mak2020; 20 (1): 257.33032582)
Barda AJ , HorvatCM, HochheiserH. A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Med Inform Decis Mak2020; 20 (1): 257.33032582Barda AJ , HorvatCM, HochheiserH. A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Med Inform Decis Mak2020; 20 (1): 257.33032582, Barda AJ , HorvatCM, HochheiserH. A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Med Inform Decis Mak2020; 20 (1): 257.33032582
White Paper on Artificial Intelligence: A European Approach to Excellence and Trust. European Commission.
Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019; 366 (6464): 447–53.
Ntoutsi E, Fafalios P, Gadiraju U, et al. Bias in data-driven artificial intelligence systems: an introductory survey. Wiley Interdiscip Rev Data Min Knowl Discov 2020; 10.
DeSanto B. US Food and Drug Administration. In: The Palgrave Encyclopedia of Interest Groups, Lobbying and Public Affairs. 2021.
Ross C. IBM's Watson supercomputer recommended 'unsafe and incorrect' cancer treatments, internal documents show. Stat News July 25, 2018. https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/ Accessed October 22, 2020.
US Department of Health and Human Services, US Food and Drug Administration, Center for Devices & Radiological Health. Developing the Software Precertification Program: Summary of Learnings and Ongoing Activities. 2020. https://www.fda.gov/media/142107/download Accessed October 22, 2020.
Office of the President of the United States. Maintaining American leadership in artificial intelligence. Executive Order 13859. 2019. https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/ Accessed October 22, 2020.
Davis SE, Greevy RA, Lasko TA, Walsh CG, Matheny ME. Comparison of prediction model performance updating protocols: using a data-driven testing procedure to guide updating. AMIA Annu Symp Proc 2019; 2019: 1002–10.
Eaneff S, Obermeyer Z, Butte AJ. The case for algorithmic stewardship of artificial intelligence and machine learning technologies. JAMA 2020; 324 (14): 1397.
Ribeiro MT, Singh S, Guestrin C. "Why should I trust you?": explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Francisco, CA: Association for Computing Machinery; 2016: 1135–44.
Graham S, Depp C, Lee EE, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep 2019; 21 (11): 116.
Memorandum M-21-06, Guidance for Regulation of Artificial Intelligence Applications. Executive Office of the President, Office of Management and Budget (OMB). 2020. https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf Accessed March 16, 2021.
Google Scraps Its AI Ethics Board Less Than Two Weeks After Launch in the Wake of Employee Protest. 2019.
Norgeot B, Quer G, Beaulieu-Jones BK, et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat Med 2020; 26 (9): 1320–4. doi:10.1038/s41591-020-1041-y.
Solomon DH, Rudin RS. Digital health technologies: opportunities and challenges in rheumatology. Nat Rev Rheumatol 2020; 16 (9): 525–35.
Bal B. An introduction to medical malpractice in the United States. Clin Orthop Relat Res 2009; 467.
Mongan J, Moy L, Kahn CE. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): a guide for authors and reviewers. Radiol Artif Intell 2020; 2 (2): e200029.
Four lessons in the adoption of machine learning in health care. 2017.
Executive Office of the President of the United States. 2016.
US Department of Health and Human Services, US Food and Drug Administration, Center for Devices & Radiological Health, Digital Health Program. Digital Health Innovation Action Plan. 2017. https://www.fda.gov/media/106331/download Accessed October 22, 2020.
Rivera S, Liu X, Chan A, Denniston A, Calvert M. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med 2020; 26.
Techlash? America's Growing Concern with Major Technology Companies. 2020.
Putting Responsible AI Into Practice. https://sloanreview.mit.edu/article/putting-responsible-ai-into-practice/ Accessed January 8, 2021.
Pedersen M, Verspoor K, Jenkinson M, Law M, Abbott D, Jackson G. Artificial intelligence for clinical decision support in neurology. Brain Commun 2020; 2.
Diakopoulos N. Algorithmic Accountability Reporting: On the Investigation of Black Boxes. 2014. http://academiccommons.columbia.edu doi:10.7916/D8ZK5TW2.
Journal of the American Medical Informatics Association, 28(7), 2021, 1582–1590
doi: 10.1093/jamia/ocab065
Advance Access Publication Date: 25 April 2021
Perspective

Joachim Roski (1), Ezekiel J. Maier (1), Kevin Vigilante (1), Elizabeth A. Kane (1), and Michael E. Matheny (2,3)

(1) Booz Allen Hamilton, Washington, DC, USA; (2) Departments of Biomedical Informatics, Biostatistics, and Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA; (3) Geriatric Research Education and Clinical Care Center, Tennessee Valley Healthcare System VA, Nashville, Tennessee, USA

Corresponding Author: Joachim Roski, PhD, MPH, Booz Allen Hamilton, 2941 Fairview Park Drive, Falls Church, VA 22042, USA (Roski_Joachim@bah.com)

Received 23 October 2020; Revised 17 March 2021; Editorial Decision 18 March 2021; Accepted 26 March 2021

ABSTRACT
Artificial intelligence (AI) is critical to harnessing value from exponentially growing health and healthcare data. Expectations are high for AI solutions to effectively address current health challenges. However, there have been prior periods of enthusiasm for AI followed by periods of disillusionment, reduced investments and progress, known as "AI Winters." We are now at risk of another AI Winter in health/healthcare due to increasing publicity of AI solutions that are not representing touted breakthroughs, and thereby decreasing trust of users in AI. In this article, we first highlight recently published literature on AI risks and mitigation strategies that would be relevant for groups considering designing, implementing, and promoting self-governance. We then describe a process for how a diverse group of stakeholders could develop and define standards for promoting trust, as well as AI risk-mitigating practices through greater industry self-governance. We also describe how adherence to such standards could be verified, specifically through certification/accreditation.
Self-governance could be encouraged by governments to complement existing regulatory schema or legislative efforts to mitigate AI risks. Greater adoption of industry self-governance could fill a critical gap to construct a more comprehensive approach to the governance of AI solutions than US legislation/regulations currently encompass. In this more comprehensive approach, AI developers, AI users, and government/legislators all have critical roles to play to advance practices that maintain trust in AI and prevent another AI Winter.

Key words: artificial intelligence/ethics, artificial intelligence/organization and administration, certification, accreditation, policy making

© The Author(s) 2021. Published by Oxford University Press on behalf of the American Medical Informatics Association. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact journals.permissions@oup.com

INTRODUCTION
Artificial intelligence (AI) has been touted as critical to harnessing value from exponentially growing health and healthcare data. AI can be used for information synthesis, clinical decision support, population health interventions, business analytics, patient self-care and engagement, research, and many other use cases. Clinician, patient, and investor expectations are high for AI technologies to effectively address contemporary health challenges.

However, prior periods of AI enthusiasm were followed by periods of disillusionment, known as "AI Winters," where AI investment and adoption withered. We are now at risk of another AI Winter if current heightened expectations for AI solutions are not met by commensurate performance. Recent examples that highlight the growing concern over inappropriate and disappointing AI solutions include racial bias in algorithms supporting healthcare decision-making [2,3], unexpected poor performance in cancer diagnostic support, or inferior performance when deploying AI solutions in real-world environments. Such AI risks may be considered a "public risk," denoting threats to human health or safety that are "centrally or mass-produced, broadly distributed, and largely outside the risk bearers' direct understanding and control." The public's concerns about such risks that could contribute to a "techlash" or AI Winter have recently been documented.

In a seminal report by the National Academy of Medicine (NAM), the authors detailed early evidence for promising AI solutions for use by patients, clinicians, administrators, public health officials, and researchers [1,8–12]. In this article, we expand on that work by identifying 10 groups of widespread AI risks and 14 groups of recently identified mitigation strategies aligned to NAM's AI implementation life cycle.

While AI governance efforts have been proposed previously [13,14], it remains unclear who (eg, government vs private sector/industry) is best positioned or likely to take specific actions to manage AI risks and ensure continued trust across a broad spectrum of AI solutions. The need for industry self-governance, which refers to the collective, voluntary actions of industry members, typically arises from broad societal concerns and public risks that governments may not be adequately addressing in their legislative or regulatory efforts. In this manuscript, we describe how AI risk mitigation practices could be promulgated through strengthened industry self-governance, specifically through certification and accreditation of AI development and implementation organizations. We also describe how such self-governance efforts could complement current government regulations and tort law to maintain trust in a broad spectrum of AI solutions for clinical, population health, research, healthcare management, patient self-management, and other applications.

AI risks and mitigation practices across the AI implementation life cycle
The recent NAM report on AI & Health described an AI implementation life cycle that can serve as an organizing schema to understand specific AI risks and mitigation practices. Figure 1 illustrates the 4-phase NAM AI implementation life cycle. Phase 1 defines clinical and operational requirements, documents the current state, and identifies critical gaps to be filled by AI development. Phase 2 encompasses the development and validation of AI algorithms for a specific use case and context. Phase 3 focuses on organizational AI implementation. Phase 4 focuses on continued maintenance and sustainment of implemented AI.

[Figure 1 depicts a cycle: Identify or Reassess Needs; Describe Existing Workflows; Define the Desired Target State; Acquire or Develop AI System; Implement AI System in Target Setting; Monitor Ongoing Performance; Maintain, Update, or De-Implement.]
Figure 1. NAM AI/ML implementation life cycle. Adapted and reproduced from: National Academy of Medicine. 2020. NAM Special Publication: Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Reproduced with permission from the National Academy of Sciences, Courtesy of the National Academies Press, Washington, DC.

We have summarized evidence for 10 groups of AI risks and 14 groups of associated evidence-based mitigation practices aligned to each phase of the NAM life cycle in Table 1. While it is beyond the scope of this manuscript to provide an exhaustive summary of the relevant literature, Table 1 can serve as a convenient summary for stakeholders interested in translating evidence-based practices into future performance standards.

Table 1. AI risks and mitigation practices across the AI implementation cycle

Phase 1: Needs Assessment
• Risks: Lack of integration of stakeholder perspectives & considerations [16–22]; Lack of clearly defined organizational values & ethics [23,24]
• Evidence-based practices: User-centered design [25,26]; Organizational readiness assessment [27–29]; Organizational prioritization process; User-centered workflow/change management process [5,30–32]

Phase 2: Development
• Risks: Data bias [33–38]; Lack of representative & equitable population [33,39]; Lack of data management [37,40]; No accounting for causal pathways
• Evidence-based practices: Data transparency & reporting [32,37,40,42–46]; Model provenance records; Promoting trust & explainability [32,47–51]; Distributed model development [52]

Phase 3: Implementation
• Risks: Lack of data encryption & privacy protections [53,54]; Lack of secure hardware; Lack of oversight for responsible AI adoption [39,55]
• Evidence-based practices: Equitable/diverse workforce; Organizational implementation [38,46,47,56,57]; Organizational governance [13,47,58,59]; Promote "human in the loop" practices [60,61]

Phase 4: Maintenance
• Risks: Lack of algorithmic accountability [47,62]
• Evidence-based practices: Performance surveillance [33,63,64]; Organization surveillance governance

STRENGTHENING INDUSTRY SELF-GOVERNANCE TO PROMOTE TRUST-ENHANCING PRACTICES
Evidence-based AI risk mitigation practices should be more widely implemented by AI developers and implementers. Wider implementation could be ensured through government regulation of AI. However, such regulation is largely lacking in the US and elsewhere [66]. Additionally, an initial group of AI developers, implementers, and other stakeholders could create new market expectations through collective, voluntary actions (industry self-governance) to identify, implement, and monitor adherence to risk mitigation practice standards [67].

Industry self-governance can be contrasted with organizational self-governance. Organizational self-governance refers to the policies and governance processes that a single organization relies on to provide overall direction to its enterprise, guide executive actions, and establish expectations for accountability. Many prominent organizations have publicly declared their adoption of select, trust-enhancing AI risk mitigation practices that we described in the previous section. At the same time, there is divergence between these organizations about both what constitutes "ethical AI" and what should be considered best practices for its realization. Poor execution of organizational self-governance can result in damage to the institutional brand, and potentially open the organization to liability [69,70]. It has been argued that a society's exclusive reliance on organizational self-governance processes is unlikely to effectively ameliorate AI risks [71,72].

Relying on industry self-governance in defining and monitoring adherence can offer several advantages. It has the potential to act faster and with greater technical expertise than government in defining and enforcing standards for products and services. It may also be more insulated from partisan politics, which can lead to legislative or regulatory deadlocks [65,74]. Increased reliance on "regulatory oversight" through self-governance that is monitored by regulators has been proposed as a modernized approach to regulation in the age of rapidly evolving health technologies. Finally, in contrast to most government regulation, industry standards and enforcement mechanisms can reach across national jurisdictions to define and transparently enforce standards for products and services with global reach, such as AI.

There is precedence for industry self-governance in the US healthcare sector. For example, a number of private sector healthcare accreditation and certification programs (eg, Joint Commission [JC] and National Committee for Quality Assurance [NCQA] accreditation, ISO9000 certification, Baldrige awards, etc) independently define and verify adherence to practice standards by hospitals, health plans, and other healthcare organizations, with accountability for patient safety and healthcare quality. In these efforts, private sector independent organizations collaborate with healthcare industry organizations (eg, health plans or hospitals) and other experts to define relevant standards and performance metrics to improve healthcare safety and quality performance. These standards and metrics are based on research evidence, when available, or expert consensus when evidence is lacking or impractical to obtain. Additionally, these organizations also assess adherence to standards and measure performance through established, industry-vetted metrics. Due to the rigor and widespread use of these standards throughout the private-sector healthcare industry, government-run healthcare facilities (eg, Military Health Treatment facilities or Veterans Affairs Medical Centers) have adopted the same industry-defined standards and performance metrics. Similarly, the Centers for Medicare & Medicaid Services (CMS) condition payment/reimbursement of Medicare Advantage plans or healthcare facilities on the adherence to NCQA and JC standards and performance metrics. CMS's deeming authority grants JC and NCQA the ability to demonstrate that their hospital and health plan clients meet or exceed CMS's own standards for safety/quality. Once that has been demonstrated, JC or NCQA accreditation/certification is accepted by CMS in lieu of the agency inspecting these health organizations itself.

To counter growing mistrust of AI solutions, the AI/health industry could implement similar self-governance processes, including certification/accreditation programs targeting AI developers and implementers. Such programs could promote standards and verify adherence in a way that balances effective AI risk mitigation with the need to continuously foster innovation. Moreover, as described above in the instances of JC and NCQA, adherence to these standards could be equally expected of private and government-run AI developers and implementers.

PROMOTING AI RISK-MITIGATING PRACTICES THROUGH CERTIFICATION/ACCREDITATION
Based on other certification and accreditation programs referenced earlier, we next describe essential steps for the implementation of an AI industry self-governed certification or accreditation program. These steps are summarized in Figure 2 and explained in more detail below:

[Figure 2 depicts stakeholder groups (Consumers/Patients, Clinicians, Healthcare Administrators, Payors, Public Sector Agencies, AI Developers & Implementors) contributing to a sequence of steps: Develop Consensus Goals and Framework; Incorporate Perspectives from Diverse Stakeholder Groups; Operationalize the Program Design; Evaluation of Program Effectiveness; Create Market Demand for the Self-Governance Program; spanning the domains of Data Selection & Management, Algorithm Development & Performance, and Business Practices & Policies.]
Figure 2. Steps to implement an accreditation/certification program.

Multistakeholder participation: Self-governance efforts requiring trust by a broad set of stakeholders must incorporate multiple perspectives. Stakeholders may include consumers/patients, clinicians and institutional providers, healthcare administrators, payors, AI developers, and relevant governmental agencies. Stakeholders could be effectively convened by an independent third-party organization (eg, a nonprofit organization) that has expertise in the field and enjoys the trust of all stakeholders. For example, the Consumer Technology Association has suggested potential standards for AI health solutions. A governing board of this organization should include representatives of all critical stakeholder groups in order to be credible and ensure that all perspectives are appropriately represented in a certification/accreditation program. Moreover, the organization's governing board should also provide guidance to multiple committees for specific, detailed elements of the overall program (eg, standard development, performance metrics development, assessment/accreditation decisions, etc).

Such an independent third-party organization could be a well-known, already established organization in a particular country, or an international organization with significant expertise that is able to operate in multiple jurisdictions. For example, the Institute for Electrical and Electronics Engineers (IEEE) has more than 417 000 members in over 160 countries and has long-standing experience in defining internationally adopted standards. It recently launched a Global Initiative on Ethics of Autonomous and Intelligent Systems and issued an iterative playbook of standards and best practices called "Ethically Aligned Design," which is intended to inform governments, organizations, businesses, and stakeholders around the world. To date, IEEE has not established a certification/accreditation program for AI developers and implementers. In addition, the World Economic Forum has also issued a model AI governance framework and assessment guide to be piloted around the globe.

Develop consensus goals and framework: A stakeholder-consented framework to enhance trust in AI and certification/accreditation program goals must be developed to promote and verify effective implementation of risk-mitigation practices.

The certifiable/accreditable entity must be clearly operationalized and have reasonable stability over time. For example, defining the certifiable/accreditable entity at a product level may be challenging, as certain AI products may evolve in relatively short periods of time. Fundamental product change over short periods may run counter to rendering meaningful certifications/accreditation decisions, which typically are meant to be valid for much longer periods (eg, 2–3 years) and based on an assessment at a particular point in time.

Define standards: A range of standards should be defined in accordance with an overarching framework and program goals. In Figure 1, we have identified a framework which aligns evidence for groups of standards for each phase of the AI implementation life cycle. Within each phase, individual standard groups can be identified based on evidence that makes up the "group" of standards for that phase. When defining standards, it is also important to define specific elements that an assessor must verify to determine if that standard has been met. It is plausible that different sets of standards might apply to AI development organizations and AI-implementing organizations, respectively, based on their
different range of activities along the AI implementation life Table 1 describes potential elements of such a framework that cycle. Organizations that both develop and implement AI sol- identifies AI risks and mitigation practices along an AI imple- utions (eg, a large health system with resources and know- mentation life cycle. The formulation of an enduring framework know to both develop and implement AI solutions) might be and overarching program goals will allow for a careful and regu- subject to a combined set of standards. lar evolution of specific standards and assessment methods that Measure adherence to standards and practices. A measure- is synchronized with the framework and program goals. ment system must be developed that allows for an indepen- Operationalize program design: Accreditation typically ensures dent verification of whether entities have met the standards. adherence to a wide range of diverse standards, whereas certifica- For instance, it must be determined what “evidence” is re- tion may refer to a smaller, narrower group of standards. For ex- quired to measure how a standard has been met (eg, review ample, AI accreditation could refer to adherence to all standards of submitted documents, calculation of submitted perfor- of a comprehensive framework, whereas certification could be mance measures, onsite observation, etc). Additionally, pro- achieved for only a subset. In either case, several elements will re- cesses must be implemented to ensure measurement quire careful consideration by an accreditation/certification en- methods are (1) valid (eg, assessment accurately verifies ad- tity, including the following: herence to a standard/practice); (2) reliable (eg, different Determine the certifiable/accreditable entity. Clear definitions reviewers reach the same result); and (3) the least burden- of the certifiable/accreditable entity must be identified. Should some. 
an organization, a specific program within the organization, Establish periodicity for recertification or accreditation.AI or a product developed by the organization, be certified or organizations, programs, methods, and products advance accredited? Should both AI developers and implementers be rapidly. A viable certification/accreditation program must certified/accredited and based on what group of standards? measure adherence to standards of a rapidly evolving indus- Moreover, the definition of the accreditable entity should be try. It also must strike the right balance between ensuring 1586 Journal of the American Medical Informatics Association, 2021, Vol. 28, No. 7 meaningful adherence standards without stifling ongoing in- standards in lieu of or as a complement to government regulation. novation and improvements over time. When effective, industry efforts of defining, adopting, and verifying Continuously review standards and methods. Standards and adherence to needed standards, can reduce the urgency of regulation assessment methods should be dynamic and adapt to evolving through the public sector and afford the opportunity to invest lim- practices. Additionally, certification/accreditation programs ited public resources otherwise. Industry self-governance has the may become more stringent and rigorous over time as experi- additional advantage of being able to establish standards for glob- ence increases with standards, assessment methods, and shift- ally distributed products and services across jurisdictions, reducing ing practices. the potential of inconsistent regulations, as well as the need and Create market demand: The likelihood of effective industry self- resources potentially required to achieve international harmoniza- governance depends on several factors. This includes, but is not tion of government regulations at a later point. 
limited to, the extent to which demand for firms’ products or If industry self-governance is lacking or relevant legislation or services relies on their brand quality or the probability of collec- political will already exists, and resources are available, government tive action by stakeholders to exert pressure on an industry to ad- agencies can reserve the right to institute their own AI programs. dress perceived risk. Verified adherence to best practices One example of a government-implemented program that incorpo- through certification/accreditation can improve AI developers’ rates several of the aforementioned elements is the US Food and and implementers’ brand through the ability to publicize adher- Drug Administration’s (FDA) software as a medical device (SaMD) ence to a “good housekeeping” seal of approval. For example, certification program. In this voluntary program, SaMD develop- being branded as a trusted developer and user of AI products or ers who rely on AI in their software are assessed and certified by services may increase demand from customers, including hospi- demonstrating an organizational culture of quality and excellence tals, health systems, health plans, physician practices, and indi- and a commitment to ongoing monitoring of software performance 79–82 viduals. A similar approach helped establish health plan in practice. However, AI-enabled SaMD represents only a small accreditation in the mid-1990s, when some large employers be- portion of AI solutions deployed in health and healthcare. Others gan demanding that health plan products they intended to pur- have suggested that additional legislation or efforts may be needed chase on behalf of their employees meet the criteria or standards to manage AI risks across a broader range of AI health solutions. for best practices established by NCQA. 
For example, it has been suggested that an Artificial Intelligence De- The public sector (ie, federal, state, and local entities), in their velopment Act (AIDA) is needed to task an organization or govern- roles as either payors or regulators, can similarly promote mar- ment agency with certifying the safety of a broad range of AI ket demand by giving preferential treatment to AI developers products/systems across industry sectors. and implementers adhering to private sector defined and imple- Approaches towards establishing greater accountability for AI mented accreditation/certification programs. To accomplish this, developers and implementers through industry self-governance pro- US government agencies could exercise deeming authority by grams or regulation do not obviate the need for addressing legal lia- recognizing private sector certification/accreditation programs bility. Unlike an accrediting organization or regulatory agency that ensure adherence to AI best practices, in lieu of submitting which would typically become active before harm from AI products their products or services separately to a public sector review. occurs, courts are reactive institutions as they apply tort law and ad- For example, US hospitals accredited by a private-sector organi- judicate liability in individual cases of alleged harm. To date, courts zation, such as the JC, can elect to be “deemed” as meeting CMS have not developed standards to specifically address who should be requirements by submitting to the review process of that private held legally responsible if an AI technology causes harm. Conse- sector accrediting entity. The public sector can also gradually in- quently, established legal theory would likely hold providers who crease the expectations of what private sector accrediting organi- rely on AI liable for malpractice in individual cases if it is proven zations must address to be deemed. 
that they owed (1) a professional duty to a patient; (2) that they Evaluation of program effectiveness. Finally, certification/accred- were in breach of such duty; (3) that that breach caused an injury; itation programs should be evaluated to ensure they meet their and (4) that there were resulting damages. In order to establish le- objective of increasing trust and adherence to best practices. gal links between certification and liability, AIDA could stipulate a Such evaluation can help determine if the program continues to certification scheme under which designers, manufacturers, sellers, meet critical private and public sector policy goals for more re- and implementers of certified AI programs would be subject to lim- sponsible AI development and implementation. If it is deter- ited tort liability, while uncertified programs that are offered for mined that the certification/accreditation program is not effective commercial sale or use would be subject to stricter joint and sever- in managing AI risks, industry or government can decide to able liability. A more in-depth exploration of legal liability is be- strengthen the program or market conditions that would make yond the scope of this article, but both liability and self-governance the program more effective. can promote greater accountability for ameliorating AI risks. INDUSTRY SELF-GOVERNANCE, REGULATION, CRITICAL CONSIDERATIONS FOR EFFECTIVE AND LIABILITY SELF-GOVERNANCE To date, the rise of AI has largely occurred in a regulatory and legis- lative vacuum. Apart from a few US states’ legislation regarding au- There are a number of critical success-factors, as well as risks, or tonomous vehicles and drones, few laws or regulations exist that potentially unintended consequences that need to be considered and specifically address the unique challenges raised by AI. 
mitigated when relying on industry self-governance as a complement Industries across the globe have at times defined, adopted, and to other legislative or regulatory efforts to foster responsible use of verified their adherence (eg, certification/accreditation) to beneficial AI. Journal of the American Medical Informatics Association, 2021, Vol. 28, No. 7 1587 In the US, the FDA is, as described earlier, currently offering cer- CONCLUSION tification for AI solutions, such as medical devices. However, the The advancement of AI is actively being promoted by the US govern- FDA’s current authority does not extend to most types of AI solu- 84–86 87 ment, governments and policy makers of other countries, and tions supporting health/healthcare needs such as population health supranational entities (eg, the European Union). However, signs management, patient/consumer self-management, research/develop- of a “techlash” and the acknowledgment of disconcerting AI-related ment, healthcare operations, etc. At the same time, some of the most risks and challenges are also abundant. prominent failures of AI solutions to deliver on their promise, there- Governmental management of public risks such as AI risks typi- fore jeopardizing trust, pertain to AI solutions not covered by the cally occurs in democratic societies through actions of the legisla- 2–5 FDA. This large segment of highly visible AI solutions in health/ tive, executive, and judicial branches of government. However, as healthcare may be an appropriate focus for self-governance efforts described, AI-specific legislation, regulation, or established legal to maintain trust. standards or case law largely do not exist worldwide—or they apply While self-governance efforts in health/healthcare have proven only to a narrow subset of AI health solutions. 
At the same time, to be successful in complementing legislative or regulatory efforts, many countries are hesitant to create national industrial policy several risks to effective self-governance should be managed care- approaches that may risk disadvantaging its industries during an in- fully. Generally speaking, self-governance will fall short when the tense global “competition” as the Fourth Industrial Revolution costs of self-governance to industry are higher than the alternatives. unfolds, dominated by smart technologies, AI, and digitalization. For example, success of self-governance may be less likely if the fol- In 2020, the US government issued a report on AI that directed fede- lowing conditions aren’t present or are not being created: a) the pub- ral agencies to avoid regulatory or nonregulatory actions that need- lic sector signaling pending legislative actions to establish greater lessly hamper AI innovation and growth. The report identified accountability for AI health solutions (eg, through expanded regula- ensuring trust in AI as the #1 principle of stewardship of AI while tory authority), and that government would accept self-governance encouraging reliance on voluntary frameworks and consensus programs in lieu of implementing its own programs to ensure ac- standards. countability; b) perceived public pressure (eg, through public media) The AI and healthcare industry could step in to manage AI risks on industry to create more trustworthy products; c) private and pub- through greater self-governance. We presented a framework to in- lic sector commitment to preferentially purchase AI solutions that crease trust in AI that maps known AI risks and their associated, have been certified/accredited; and d) a prominent initial (small) set mitigating, evidence-based practices to each phase of the AI imple- of organizations (AI developers/users) willing to collaborate under mentation life cycle. 
We also described how this framework could the auspices of an independent organization to define standards and inform the standard development for certification/accreditation pro- hold themselves accountable to them, thereby creating a market ex- grams for a broad spectrum of AI health solutions that is not cov- pectation for certification/accreditation for AI health solution devel- ered through current regulation. opers or implementers. Since many private companies, research Potential future legislation and regulation across the globe will, institutions, and public sector organizations have issued principles in the coming years, likely differ in terms of managing specific AI and guidelines for ethical AI, there may be a significant number of risks. However, encouraging the use of evidence-based risk mitiga- organizations interested in initiating such self-governance efforts. tion practices, promulgated through self-governance and certifica- Importantly, self-governance is likely only successful if all stake- tion and accreditation programs, could be effective and efficient holders have confidence that standards and verification methods across national jurisdictions in promoting and sustaining user trust were developed by appropriately balancing perspectives of consum- in AI, while staving off another AI Winter. ers/patients, clinicians, AI developers, AI users, and others. To that end, as described earlier, it is imperative that a third party, indepen- dent organization (eg, rather than a trade organization representing FUNDING 1 stakeholder group), is charged with the development of standards This research received no specific grant from any funding agency in the pub- and verification methods. Balanced development/oversight pro- lic, commercial or not-for-profit sectors. cesses, resulting in meaningful and operationally “achievable” per- formance standards, avoid the risk of standards/verification methods being perceived as self-serving for industry. 
However, standards need to be created that don’t stifle innovation by being un- AUTHOR CONTRIBUTIONS necessarily restrictive or by creating “high-costs” for accreditation/ JR and MEM developed the concept and designed the manuscript; KV and certification that may deter some AI developers from continuing to EJM provided key intellectual support, and EAK provided research support develop valuable AI health solutions. and helped edit the manuscript. To initiate the self-governance processes through an independent organization, start-up funding by the public sector or private-sector foundations or a group of organizations may be necessary. Such DATA AVAILABILITY STATEMENT funding could support the independent organization in convening There are no new data associated with this article. No new data were gener- stakeholders and defining an initial set of standards and verification ated or analyzed in support of this research. methods. Ongoing maintenance of standards and certification/ac- creditation program operations would likely need to be funded by fees levied on those organizations seeking certification/accreditation. CONFLICT OF INTEREST STATEMENT Such a model is analog to the funding/business models of other health/healthcare certification/accreditation efforts. None declared. 1588 Journal of the American Medical Informatics Association, 2021, Vol. 28, No. 7 20. Solomon DH, Rudin RS. Digital health technologies: opportunities and REFERENCES challenges in rheumatology. Nat Rev Rheumatol 2020; 16 (9): 525–35. 1. Roski J, Chapman W, Heffner J, et al. How artificial intelligence is chang- 21. Graham S, Depp C, Lee EE, et al. Artificial intelligence for mental health ing health and health care. In: Matheny M, Israni ST, Ahmed M, Whicher and mental illnesses: an overview. Curr Psychiatry Rep 2019; 21 (11): D, eds, Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. NAM Special Publication. Washington, DC: National 22. 
Liyanage H, Liaw ST, Jonnagaddala J, et al. Artificial intelligence in pri- Academy of Medicine; 2019. https://nam.edu/wp-content/uploads/2019/ mary health care: perceptions, issues, and challenges. Yearb Med Inform 12/AI-in-Health-Care-PREPUB-FINAL.pdf Accessed 22 Oct. 2020 2019; 28 (1): 41–6. 2. Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algo- 23. van de Poel I. Embedding Values in Artificial Intelligence (AI) Systems. rithm used to manage the health of populations. Science 2019; 366 Minds and Machines. 2020. https://link.springer.com/article/10.1007/ (6464): 447–53. s11023-020-09537-4 Accessed October 5, 2020 3. Johnson CY. Racial bias in a medical algorithm favors white patients over 24. Gerhards H, Weber K, Bittner U, Fangerau H. Machine Learning Health- sicker black patients. The Washington Post. October 24, 2019. https:// care Applications (ML-HCAs) are no stand-alone systems but part of an www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algo- ecosystem - a broader ethical and health technology assessment approach rithm-favors-white-patients-over-sicker-black-patients/ Accessed Octo- is needed. Am J Bioeth 2020; 20 (11): 46–8. ber 22, 2020 25. Filice RW, Ratwani RM. The case for user-centered artificial intelligence 4. Ross C. IBM’s Watson supercomputer recommended ‘unsafe and incor- in radiology. Radiology 2020; 2 (3): e190095. rect’ cancer treatments, internal documents show. Statþ News June 25, 26. Barda AJ, Horvat CM, Hochheiser H. A qualitative research framework 2018. https://www.statnews.com/2018/07/25/ibm-watson-recommended- for the design of user-centered displays of explanations for machine learn- unsafe-incorrect-treatments/ Accessed 22 October, 2020 ing model predictions in healthcare. BMC Med Inform Decis Mak 2020; 5. Beede E, Baylor E, Hersch F, et al. A human-centered evaluation of a deep 20 (1): 257. learning system deployed in clinics for the detection of diabetic retinopa- 27. 
Miake-Lye IM, Delevan DM, Ganz DA, et al. Unpacking organizational thy. In: Proceedings of the 2020 CHI Conference on Human Factors in readiness for change: an updated systematic review and content analysis Computing Systems. Honolulu, HI: Association for Computing Machin- of assessments. BMC Health Serv Res 2020; 20 (1): 106. ery; 2020: 1–12. 28. Alami H, Lehoux P, Denis J-L, et al. Organizational readiness for artificial 6. Huber P. Safety and the second best: the hazards of public risk manage- intelligence in health care: insights for decision-making and practice. J ment in the courts. Columbia Law Rev 1985; 85 (2): 277–337. Health Organ Manag2020; 35 (1): 106–14. 7. Knight Foundation. Techlash? America’s Growing Concern with Major 29. Williams I. Organizational readiness for innovation in health care: some Technology Companies. 2020. https://knightfoundation.org/reports/tech- lessons from the recent literature. Health Serv Manage Res 2011; 24 (4): lash- americas-growing-concern-with-major- technology-companies/ 213–8. Accessed March 16, 2021 30. Cai CJ, Reif E, Hegde N, et al. Human-Centered Tools for Coping with 8. Kasthurirathne SN, Vest JR, Menachemi N, et al. Assessing the capacity Imperfect Algorithms During Medical Decision-Making. In: Proceedings of social determinants of health data to augment predictive models identi- of the 2019 CHI Conference on Human Factors in Computing Systems. fying patients in need of wraparound social services. J Am Med Inform New York: Association for Computing Machinery; 2019: 1–14; Glasgow, Assoc 2018; 25 (1): 47–53. Scotland, Uk. doi:10.1145/3290605.3300234 9. Contreras I, Vehi J. Artificial intelligence for diabetes management and de- 31. Cai CJ, Winter S, Steiner D, Wilcox L, Terry M. “Hello AI”: uncovering cision support: literature review. J Med Internet Res 2018; 20 (5): e10775. the onboarding needs of medical practitioners for human-AI collaborative 10. Zieger A. Will Payers Use AI to Do Prior Authorization? 
And Will These decision-making. Proc Acm Hum-Comput Interact 2019; 3 (CSCW): AIs Make Things Better? Healthcare IT Today. December 27th, 2018. 1–24. Article 104. https://www.healthcareittoday.com/2018/12/27/will-payers-use-ai-to-do- 32. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust prior-authorization-and-will-these-ais-make-things-better/ Accessed 22 in healthcare: focus on clinicians. J Med Internet Res 2020; 22 (6): October, 2020 e15154. 11. Zitnik M, Agrawal M, Leskovec J. Modeling polypharmacy side effects 33. Kelly CJ, Karthikesalingam A, Suleyman M, et al. Key challenges for deliv- with graph convolutional networks. Bioinformatics 2018; 34 (13): ering clinical impact with artificial intelligence. BMC Med 2019; 17 (1): i457–66. 12. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior ther- 34. Ntoutsi E, Fafalios P, Gadiraju U, et al. Bias in data-driven artificial intelli- apy to young adults with symptoms of depression and anxiety using a fully gence systems—An introductory survey. WIREs Data Mining Knowl Dis- automated conversational agent (Woebot): a randomized controlled trial. cov 2020; 10 (3): e1356. JMIR Ment Health 2017; 4 (2): e19. 35. Gianfrancesco MA, Tamang S, Yazdany J, et al. Potential biases in ma- 13. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the appli- chine learning algorithms using electronic health record data. JAMA In- cation of AI in health care. J Am Med Inform Assoc 2020; 27 (3): 491–7. tern Med 2018; 178 (11): 1544–7. 14. Butcher J, Beridze I. What is the state of artificial intelligence governance 36. Lee NT, Resnick P, Barton G. Algorithmic bias detection and mitigation: globally? RUSI J 2019; 164 (5-6): 88–96. Best practices and policies to reduce consumer harms. Center for Technol- 15. Mayer F, Gereffi G. Regulation and economic globalization: prospects ogy Innovation, Brookings. 2019. https://www.brookings.edu/research/al- and limits of private governance. 
Bus Polit 2010; 12 (3): 1–25. gorithmic-bias-detection-and-mitigation-best-practices-and-policies-to- 16. Johnson KB, Wei WQ, Weeraratne D, et al. Precision medicine, AI, and reduce-consumer-harms/ Accessed 22 October, 2020 the future of personalized health care [published online ahead of print, 37. Hernandez-Boussard T, Bozkurt S, Ioannidis JPA, Shah NH. MINIMAR 2020 Sep 22]. Clin Transl Sci 2021; 14 (1): 86–93. (MINimum Information for Medical AI Reporting): Developing reporting 17. Wall J, Krummel T. The digital surgeon: how big data, automation, and standards for artificial intelligence in health care. J Am Med Inform Assoc artificial intelligence will change surgical practice. J Pediatr Surg 2020; 2020; 27 (12): 2011–5. 55S: 47–50. 38. DeCamp M, Lindvall C. Latent bias and the implementation of arti- 18. Pedersen M, Verspoor K, et al. Artificial intelligence for clinical decision ficial intelligence in medicine. J Am Med Inform Assoc. 2020; 27 support in neurology. Brain Commun 2020; 2 (2): fcaa096. (12): 2020–3. 19. Ting DSW, Peng L, Varadarajan AV, et al. Deep learning in ophthalmol- 39. Tzachor A, Whittlestone J, Sundaram L, hEigeartaigh SO. Artificial intelli- ogy: the technical and clinical considerations. Prog Retin Eye Res 2019; gence in a crisis needs ethics with urgency. Nat Mach Intell 2020; 2 (7): 72: 100759. 365–6. Journal of the American Medical Informatics Association, 2021, Vol. 28, No. 7 1589 40. Norgeot B, Quer G, Beaulieu-Jones BK, et al. Minimum information 60. Holzinger A, Plass M, Kickmeier-Rust M, et al. Interactive machine learn- about clinical artificial intelligence modeling: the MI-CLAIM checklist. ing: experimental evidence for the human in the algorithmic loop. Appl Nat Med 2020; 26 (9): 1320–4. doi:10.1038/s41591-020-1041-y Intell July 2019; 49 (7): 2401–14. 41. Richens JG, Lee CM, Johri S. Improving the accuracy of medical diagnosis 61. Lee D, et al. 
Journal of the American Medical Informatics Association – Oxford University Press
Published: Apr 25, 2021
Keywords: artificial intelligence/ethics; artificial intelligence/organization and administration; certification; accreditation; policy making