Adversarial Examples Might be Avoidable: The Role of Data Concentration in Adversarial Robustness
Ambar Pal, Jeremias Sulam, René Vidal
Published 2023 in Neural Information Processing Systems

ABSTRACT
The susceptibility of modern machine learning classifiers to adversarial examples has motivated theoretical results suggesting that such examples might be unavoidable. However, these results can be too general to apply to natural data distributions. Indeed, humans are quite robust on tasks involving vision. This apparent conflict motivates a deeper dive into the question: are adversarial examples truly unavoidable? In this work, we theoretically demonstrate that a key property of the data distribution -- concentration on small-volume subsets of the input space -- determines whether a robust classifier exists. We further demonstrate that, for a data distribution concentrated on a union of low-dimensional linear subspaces, exploiting this structure naturally leads to classifiers that enjoy data-dependent polyhedral robustness guarantees, improving upon methods for provable certification in certain regimes.
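The union-of-subspaces setting described in the abstract can be illustrated with a toy sketch. The example below is an assumption-laden simplification, not the paper's construction: each class lies exactly on its own low-dimensional linear subspace, a nearest-subspace classifier picks the subspace with the smaller projection residual, and half the residual gap gives a crude certified robustness radius (each residual is 1-Lipschitz in the input, so no perturbation smaller than half the gap can flip the prediction).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data model (an assumption for illustration): two k-dimensional
# linear subspaces of R^d, one per class.
d, k = 10, 2
U0, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal basis, class 0
U1, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal basis, class 1

def residual(x, U):
    """Distance from x to span(U), for U with orthonormal columns."""
    return np.linalg.norm(x - U @ (U.T @ x))

def classify(x):
    """Nearest-subspace classifier: smaller residual wins."""
    return 0 if residual(x, U0) <= residual(x, U1) else 1

# A point drawn from subspace 0 has zero residual to U0, so it is
# classified 0. Since each residual changes by at most ||delta|| under a
# perturbation delta, half the residual gap is a certified l2 radius.
x = U0 @ rng.standard_normal(k)
margin = abs(residual(x, U0) - residual(x, U1)) / 2
```

This crude isotropic certificate only hints at the idea; the paper's guarantees are data-dependent and polyhedral rather than spherical.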
PUBLICATION RECORD
- Publication date: 2023-09-28
- Fields of study: Computer Science
- Source metadata: Semantic Scholar