Universal Batch Learning with Log-Loss
Published 2018 in International Symposium on Information Theory

ABSTRACT
In this paper we consider the problem of batch learning with log-loss in a stochastic setting where, given the data features, the outcome is generated by an unknown distribution from a class of models. Using the minimax theorem and information-theoretic tools, we derive the minimax universal learning solution, a redundancy-capacity theorem, and an upper bound on the performance of the optimal solution. The resulting universal learning solution is a mixture over the models in the considered class. Furthermore, we obtain an improved bound on the generalization error that decays as $O(\log N/N)$, where $N$ is the sample size, instead of the $O(\sqrt{\log N/N})$ rate commonly attained in statistical learning theory for the empirical risk minimizer.
PUBLICATION RECORD
- Publication year: 2018
- Venue: International Symposium on Information Theory
- Publication date: 2018-06-01
- Fields of study: Mathematics, Computer Science
- Source metadata: Semantic Scholar