Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with increasing volumes of data. There are challenges to adoption, however, as the outputs of such systems may be difficult to trust for a variety of reasons. We conducted a naturalistic study using the Critical Incident Technique (CIT) to identify which factors were present in incidents where trust in an AI technology used in intelligence work (i.e., the collection, processing, analysis, and dissemination of intelligence) was gained or lost. We found that explainability and performance of the AI were the most prominent factors in responses; however, several other factors also affected the development of trust. Further, most incidents involved two or more trust factors, demonstrating that trust is a multifaceted phenomenon. We also conducted a broader thematic analysis to identify other trends in the data. We found that trust in AI is often affected by the interaction of other people with the AI (i.e., the people who develop it or use its outputs), and that involving end users in the development of the AI also affects trust. We provide an overview of key findings, practical implications for design, and possible areas for future research.
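The abstract's quantitative claims (which trust factors were most prominent, and that most incidents involved two or more factors) come down to simple tallies over coded incidents. A minimal sketch of that tally, using entirely hypothetical coded data and illustrative factor names (the actual coding scheme and counts are in the paper, not reproduced here):

```python
from collections import Counter

# Hypothetical coded incidents: each is the set of trust factors
# a coder tagged in that critical incident (names are illustrative).
incidents = [
    {"explainability", "performance"},
    {"performance"},
    {"explainability", "performance", "usability"},
    {"usability", "explainability"},
]

# Frequency of each factor across all incidents.
factor_counts = Counter(f for inc in incidents for f in inc)

# Share of incidents involving two or more trust factors.
multi_factor_share = sum(len(inc) >= 2 for inc in incidents) / len(incidents)

print(dict(factor_counts))
print(multi_factor_share)
```

With these toy data, "explainability" and "performance" each appear in three of four incidents, and three of four incidents (0.75) involve two or more factors; the real study's proportions would be computed the same way from its coded incident set.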
Journal of Cognitive Engineering and Decision Making – SAGE
Published: Dec 1, 2022
Keywords: intelligence analysis; human automation interaction; military; contextual inquiry