On Selective, Mutable and Dialogic XAI: A Review of What Users Say about Different Types of Interactive Explanations
Astrid Bertrand, Tiphaine Viard, Rafik Belloum, James R. Eagan, Winston Maxwell
Published 2023 in International Conference on Human Factors in Computing Systems
ABSTRACT
Explainable AI (XAI) has matured in recent years to provide more human-centered explanations of AI-based decision systems. While static explanations remain predominant, interactive XAI has gained momentum as a way to support the human cognitive process of explaining. However, the evidence regarding the benefits of interactive explanations is unclear. In this paper, we map existing findings by conducting a detailed scoping review of 48 empirical studies in which interactive explanations are evaluated with human users. We also create a classification of interactive techniques specific to XAI and group the resulting categories according to their role in the cognitive process of explanation: "selective", "mutable", or "dialogic". We identify the effects of interactivity on several user-based metrics. We find that interactive explanations improve the perceived usefulness and performance of the human+AI team, but take longer. We highlight conflicting results regarding cognitive load and overconfidence. Lastly, we describe underexplored areas, including measuring curiosity or learning, and perturbing outcomes.
PUBLICATION RECORD
- Publication date
2023-04-19
- Fields of study
Computer Science
CITED BY
58 citing papers