A Naturalistic Investigation of Trust, AI, and Intelligence Work

Publisher
SAGE
Copyright
Copyright © 2022, Human Factors and Ergonomics Society
ISSN
1555-3434
eISSN
2169-5032
DOI
10.1177/15553434221103718

Abstract

Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with increasing amounts of data. There are challenges in adoption, however, as outputs of such systems may be difficult to trust for a variety of reasons. We conducted a naturalistic study using the Critical Incident Technique (CIT) to identify which factors were present in incidents where trust in an AI technology used in intelligence work (i.e., the collection, processing, analysis, and dissemination of intelligence) was gained or lost. We found that explainability and performance of the AI were the most prominent factors in responses; however, several other factors affected the development of trust. Further, most incidents involved two or more trust factors, demonstrating that trust is a multifaceted phenomenon. We also conducted a broader thematic analysis to identify other trends in the data. We found that trust in AI is often affected by the interaction of other people with the AI (i.e., people who develop it or use its outputs), and that involving end users in the development of the AI also affects trust. We provide an overview of key findings, practical implications for design, and possible future areas for research.

Journal

Journal of Cognitive Engineering and Decision Making (SAGE)

Published: Dec 1, 2022

Keywords: intelligence analysis; human automation interaction; military; contextual inquiry
