PAC-Bayesian Bound for the Conditional Value at Risk
Zakaria Mhammedi, Benjamin Guedj, R. C. Williamson
Published 2020 in Neural Information Processing Systems
ABSTRACT
Conditional Value at Risk (CVaR) is a family of "coherent risk measures" that generalizes the traditional mathematical expectation. Widely used in mathematical finance, it is garnering increasing interest in machine learning, e.g., as an alternative approach to regularization and as a means of ensuring fairness. This paper presents a generalization bound for learning algorithms that minimize the CVaR of the empirical loss. The bound is of PAC-Bayesian type and is guaranteed to be small when the empirical CVaR is small. We achieve this by reducing the problem of estimating CVaR to that of merely estimating an expectation. As a by-product, this enables us to obtain concentration inequalities for CVaR even when the random variable in question is unbounded.
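As a rough illustration of the quantity the paper bounds, here is a minimal pure-Python sketch of the empirical CVaR at level alpha, taken as the mean of the worst ceil(alpha * n) of n observed losses. This is one common convention; the paper's parameterization of the level may differ, and this snippet is not the paper's estimator.

```python
import math

def empirical_cvar(losses, alpha=0.05):
    """Empirical CVaR at level alpha: mean of the worst ceil(alpha * n)
    losses out of n (one common convention, not the paper's estimator)."""
    n = len(losses)
    k = max(1, math.ceil(alpha * n))   # size of the worst-case tail
    tail = sorted(losses)[-k:]         # the k largest losses
    return sum(tail) / k

# The worst half of {1, 2, 3, 4} is {3, 4}, so CVaR at level 0.5 is 3.5.
print(empirical_cvar([1, 2, 3, 4], alpha=0.5))
```

For context, the well-known variational form CVaR_alpha(Z) = inf over mu of { mu + E[(Z - mu)_+] / alpha } (Rockafellar-Uryasev) already expresses CVaR through an expectation of an auxiliary variable, which is related in spirit to the reduction to expectation estimation that the abstract describes.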
PUBLICATION RECORD
- Publication date: 2020-06-26
- Fields of study: Mathematics, Computer Science
- Source: Semantic Scholar