Human Reliance on Machine Learning Models When Performance Feedback is Limited: Heuristics and Risks

Zhuoran Lu, Ming Yin

Published 2021 in the International Conference on Human Factors in Computing Systems

ABSTRACT

This paper addresses an under-explored problem in AI-assisted decision-making: when objective performance information about the machine learning model underlying a decision aid is absent or scarce, how do people calibrate their reliance on the model? Through three randomized experiments, we explore the heuristics people may use to adjust their reliance on machine learning models when performance feedback is limited. We find that the level of agreement between people and a model on decision-making tasks in which people have high confidence significantly affects reliance on the model when people receive no information about the model's performance, but this effect changes once aggregate-level model performance information becomes available. Furthermore, the influence of high-confidence human-model agreement on people's reliance on a model is moderated by people's confidence in the cases where they disagree with the model. We discuss the potential risks of these heuristics and offer design implications for promoting appropriate reliance on AI.

PUBLICATION RECORD

  • Publication year

    2021

  • Venue

    International Conference on Human Factors in Computing Systems

  • Publication date

    2021-05-06

  • Fields of study

    Computer Science, Psychology


  • Source metadata

    Semantic Scholar



REFERENCES

  • 61 references

CITED BY

  • 140 citing papers