Todd Kulesza, S. Stumpf, M. Burnett, Irwin Kwan (2012)
Tell me more?: the effects of mental model soundness on personalizing an intelligent agent. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
W. Clancey (1987)
Knowledge-based tutoring: the GUIDON program
Lamia Alam, Shane Mueller (2021)
Assessing Clustering Methods to Establish Reliability and Consensus in Card Sorting Tasks
D. Berry, D. Broadbent (1987)
Explanation and Verbalization in a Computer-Assisted Search Task. Quarterly Journal of Experimental Psychology, 39
Tim Miller (2017)
Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267
Gary Klein, Jennifer Phillips, Erica Rall, Deborah Peluso (2007)
A Data–Frame Theory of Sensemaking
E. Shortliffe, B. Buchanan, E. Feigenbaum (1979)
Knowledge engineering for medical decision making: A review of computer-based clinical decision aids. Proceedings of the IEEE, 67
Arash Shaban-Nejad, Martin Michalowski, D. Buckeridge (2020)
Explainability and Interpretability: Keys to Deep Medicine
D. Besnard, David Greathead, G. Baxter (2004)
When mental models go wrong: co-occurrences in dynamic, critical systems. International Journal of Human-Computer Studies, 60
Todd Kulesza, S. Stumpf, M. Burnett, Weng-Keen Wong, Yann Riche, Travis Moore, Ian Oberst, Amber Shinsel, Kevin McIntosh (2010)
Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs
B. Crandall, Gary Klein, R. Hoffman (2006)
Working Minds: A Practitioner's Guide to Cognitive Task Analysis
P. Johnson, James Moen, W. Thompson (1988)
Garden Path Errors in Diagnostic Reasoning
Cecilia Panigutti, A. Perotti, D. Pedreschi (2020)
Doctor XAI: an ontology-based approach to black-box sequential data classification explanations. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
Dónal Doyle, A. Tsymbal, P. Cunningham (2003)
A Review of Explanation and Explanation in Case-Based Reasoning
Mary Dzindolet, S. Peterson, Regina Pomranky, L. Pierce, Hall Beck (2003)
The role of trust in automation reliance. International Journal of Human-Computer Studies, 58
Shane Mueller, R. Hoffman, W. Clancey, Abigail Emrey, Gary Klein (2019)
Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI. ArXiv, abs/1902.01876
(2013)
Package ‘cluster’. Available at
W. Clancey (1981)
The Epistemology of a Rule-Based Expert System: A Framework for Explanation. Artificial Intelligence, 20
D. Hilton, J. McClure, Ben Slugoski (2005)
Counterfactuals, conditionals and causality: A social psychological perspective
Gary Klein, Robert Hoffman, Shane Mueller, E. Newsome (2021)
Modeling the Process by Which People Try to Explain Complex Things to Others. Journal of Cognitive Engineering and Decision Making, 15
Aayush Bansal, Ali Farhadi, Devi Parikh (2014)
Towards Transparent Systems: Semantic Characterization of Failure Modes
J. Goguen, J. Weiner, C. Linde (1983)
Reasoning and Natural Explanation. International Journal of Man-Machine Studies, 19
Lamia Alam (2020)
Investigating the Impact of Explanation on Repairing Trust in AI Diagnostic Systems for Re-Diagnosis
S. Khemlani, P. Johnson-Laird (2010)
Explanations make inconsistencies harder to detect, 32
Brian Lim, A. Dey (2009)
Assessing demand for intelligibility in context-aware applications. Proceedings of the 11th International Conference on Ubiquitous Computing
C. Ellis (1989)
Expert Knowledge and Explanation: The Knowledge-Language Interface
S. Lauritsen, Mads Kristensen, Mathias Olsen, Morten Larsen, K. Lauritsen, Marianne Jørgensen, Jeppe Lange, B. Thiesson (2019)
Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nature Communications, 11
Or Biran, Courtenay Cotton (2017)
Explanation and Justification in Machine Learning: A Survey
Joseph Halpern, J. Pearl (2000)
Causes and explanations: A structural-model approach
Joseph Halpern, J. Pearl (2001)
Causes and Explanations: A Structural-Model Approach. Part II: Explanations. The British Journal for the Philosophy of Science, 56
C. Borgman (1999)
The user's mental model of an information retrieval system
S. Amershi, D. Chickering, S. Drucker, Bongshin Lee, P. Simard, Jina Suh (2015)
ModelTracker: Redesigning Performance Analysis Tools for Machine Learning. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems
K. Baier (2016)
The Uses Of Argument
D. Kahneman, C. Varey (1990)
Propensities and counterfactuals: The loser that almost won. Journal of Personality and Social Psychology, 59
V. Patel, E. Shortliffe, M. Stefanelli, Peter Szolovits, M. Berthold, R. Bellazzi, A. Abu-Hanna (2009)
The coming of age of artificial intelligence in medicine. Artificial Intelligence in Medicine, 46(1)
R. Hoffman, Shane Mueller, Gary Klein, Jordan Litman (2018)
Metrics for Explainable AI: Challenges and Prospects. ArXiv, abs/1812.04608
B. Stafford (2006)
Working Minds. Phi Delta Kappan Magazine, 88
L. Militello, R. Hutton (1998)
Applied cognitive task analysis (ACTA): a practitioner's toolkit for understanding cognitive task demands. Ergonomics, 41(11)
Lisa Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, B. Schiele, Trevor Darrell (2016)
Generating Visual Explanations. ArXiv, abs/1603.08507
E. Shortliffe (1974)
A rule-based computer program for advising physicians regarding antimicrobial therapy selection
Maria-Florina Balcan, Mustafa Bilgic (2000)
Interactive Machine Learning
(1994, September)
Context needs in cooperative building of explanations
(1989)
The effect of user models on the production of explanations. In C. Ellis (Ed.)
AI systems are increasingly being developed to serve as the first point of contact for patients. These systems typically focus on question answering and on integrating chat interfaces with diagnostic algorithms, but they are likely to suffer from many of the same deficiencies in explanation that have plagued medical diagnostic systems since the 1970s (Shortliffe, 1979). To provide better guidance about how such systems should approach explanation, we report on an interview study in which we identified the explanations physicians used in the context of re-diagnosis or a change in diagnosis. Seven current and former physicians with a variety of specialties and levels of experience took part in the interviews. A review of the interview notes yielded several high-level observations, and a thematic analysis of the explanation contents produced nine broad categories of explanation. We also present these in a diagnosis meta-timeline that captures many of the commonalities we observed across diagnoses during the interviews. Based on these results, we offer design recommendations for developing diagnostic AI systems. Altogether, this study suggests explanation strategies, approaches, and methods that medical diagnostic AI systems might use to improve user trust and satisfaction.
Journal of Cognitive Engineering and Decision Making – SAGE
Published: Jun 1, 2022