Taming the Triangle: On the Interplays Between Fairness, Interpretability, and Privacy in Machine Learning
Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
Published 2025 in International Conference on Climate Informatics
ABSTRACT
Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan approval, or recidivism prediction. It is therefore crucial to ensure that the learnt models can be audited and understood by human users, do not create or reproduce discrimination or bias, and do not leak sensitive information about their training data. Indeed, interpretability, fairness, and privacy are key requirements for the development of responsible machine learning, and all three have been studied extensively over the last decade. However, they have mainly been considered in isolation, while in practice they interact with each other, either positively or negatively. In this survey paper, we review the literature on the interactions between these three desiderata. More precisely, for each pairwise interaction, we summarize the identified synergies and tensions. These findings highlight several fundamental theoretical and empirical conflicts, while also demonstrating that jointly considering these different requirements is challenging when one aims to preserve a high level of utility. To address this issue, we also discuss possible conciliation mechanisms, showing that careful design makes it possible to successfully handle these different concerns in practice.
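To make one of the tensions mentioned in the abstract concrete, the sketch below (in Python, not taken from the paper) shows how differential privacy can interfere with fairness auditing: the standard Laplace mechanism is used to release the group counts needed to estimate a demographic parity gap, and the noise degrades the estimate as the privacy budget shrinks. The synthetic dataset, the epsilon values, and the even budget split across the four counts are illustrative assumptions, not choices made by the authors.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic audit data (illustrative only): binary model predictions
# and a binary sensitive attribute for 1000 individuals.
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

def demographic_parity_gap(y_pred, group):
    """Exact gap |P(yhat=1 | group=0) - P(yhat=1 | group=1)|."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

def dp_parity_gap(y_pred, group, epsilon):
    """Same gap, but each count is released via the Laplace mechanism.
    Counting queries have sensitivity 1, so the noise scale is
    1 / (per-count budget); here the total epsilon is naively split
    across the four counts (basic sequential composition)."""
    eps_per_count = epsilon / 4
    noisy = lambda c: c + rng.laplace(scale=1.0 / eps_per_count)
    rates = []
    for g in (0, 1):
        positives = noisy(np.sum(y_pred[group == g] == 1))
        total = noisy(np.sum(group == g))
        rates.append(positives / total)
    return abs(rates[0] - rates[1])

print("exact gap:", demographic_parity_gap(y_pred, group))
for eps in (0.01, 0.1, 1.0):
    print(f"eps={eps}: noisy gap estimate = {dp_parity_gap(y_pred, group, eps):.3f}")

At small epsilon the noisy estimate bears little resemblance to the exact gap (it can even exceed 1, since the noisy counts are unbounded), which illustrates the kind of fairness/privacy conflict, and the need for conciliation mechanisms, that the survey catalogs.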
PUBLICATION RECORD
- Publication year: 2025
- Venue: International Conference on Climate Informatics
- Publication date: 2025-08-01
- Fields of study: Computer Science
- Source metadata: Semantic Scholar