Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning
Yuejiang Liu, Alexandre Alahi, Chris Russell, Max Horn, Dominik Zietlow, Bernhard Schölkopf, Francesco Locatello
Published 2023 in CLEaR
ABSTRACT
Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions. Yet existing efforts remain largely limited to simple synthetic settings far removed from real-world problems. In this paper, we present Causal Triplet, a causal representation learning benchmark featuring not only visually more complex scenes but also two crucial desiderata commonly overlooked in previous work: (i) an actionable counterfactual setting, in which only certain object-level variables admit counterfactual observations while others do not; and (ii) an interventional downstream task emphasizing out-of-distribution robustness derived from the independent causal mechanisms principle. Through extensive experiments, we find that models built with knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts. However, recent causal representation learning methods still struggle to identify such latent structures, indicating substantial challenges and opportunities for future work. Our code and datasets will be available at https://sites.google.com/view/causaltriplet.
PUBLICATION RECORD
- Publication year: 2023
- Publication date: 2023-01-12
- Venue: CLEaR
- Fields of study: Computer Science
- Source metadata: Semantic Scholar
REFERENCES
- 52 references
CITED BY
- 28 citing papers