Effect of Information Presentation on Fairness Perceptions of Machine Learning Predictors
N. V. Berkel, Jorge Gonçalves, D. Russo, S. Hosio, M. Skov
Published 2021 in International Conference on Human Factors in Computing Systems
ABSTRACT
The uptake of artificial intelligence-based applications raises concerns about the fairness and transparency of AI behaviour. Consequently, the Computer Science community calls for the involvement of the general public in the design and evaluation of AI systems. Assessing the fairness of individual predictors is an essential step in the development of equitable algorithms. In this study, we evaluate the effect of two common visualisation techniques (text-based and scatterplot) and the display of the outcome information (i.e., ground-truth) on the perceived fairness of predictors. Our results from an online crowdsourcing study (N = 80) show that the chosen visualisation technique significantly alters people’s fairness perception and that the presented scenario, as well as the participant’s gender and past education, influence perceived fairness. Based on these results we draw recommendations for future work that seeks to involve non-experts in AI fairness evaluations.
PUBLICATION RECORD
- Publication year: 2021
- Publication date: 2021-05-06
- Venue: International Conference on Human Factors in Computing Systems
- Fields of study: Computer Science
- Source metadata: Semantic Scholar