When consumers need more interpretability of artificial intelligence (AI) recommendations? The effect of decision-making domains
Abstract
Due to the “black-box” nature of artificial intelligence (AI) recommendations, interpretability is critical to the consumer experience of human-AI interaction. Unfortunately, improving the interpretability of AI recommendations is technically challenging and costly. There is therefore an urgent need for the industry to identify when interpretability of AI recommendations is most likely to be required. This study defines the construct of Need for Interpretability (NFI) of AI recommendations and empirically tests consumers’ need for interpretability of AI recommendations across different decision-making domains. Across two experimental studies, we demonstrate that consumers do indeed have a need for interpretability of AI recommendations, and that this need is higher in utilitarian domains than in hedonic domains. These findings can help companies identify the varying need for interpretability of AI recommendations in different application scenarios.