How level of explanation detail affects human performance in interpretable intelligent systems: A study on explainable fact checking
Abstract
Explainable artificial intelligence (XAI) systems aim to provide users with information that helps them better understand computational models and reason about why outputs were generated. However, there are many different ways an XAI interface might present explanations, which makes designing an appropriate and effective interface an important and challenging task. Our work investigates how different types and amounts of explanatory information affect users' ability to use explanations to understand system behavior and improve task performance. The presented research employs a system for detecting the truthfulness of news statements. In a controlled experiment, participants were tasked with using the system to assess news statements and with learning to predict the output of the AI. Our experiment compares various levels of explanatory information, contributing empirical data about how explanation detail influences utility. The results show that more explanatory information improves participants' understanding of AI models, but this benefit comes at the cost of the time and attention needed to make sense of the explanation.