Asymptotic Log-Loss of Prequential Maximum Likelihood Codes
Published 2005 in Annual Conference Computational Learning Theory
ABSTRACT
We analyze the Dawid-Rissanen prequential maximum likelihood codes relative to one-parameter exponential family models M. If data are i.i.d. according to an (essentially) arbitrary P, then the redundancy grows at rate (c/2) ln n. We show that c = σ₁²/σ₂², where σ₁² is the variance of P and σ₂² is the variance of the distribution M* ∈ M that is closest to P in KL divergence. This shows that prequential codes behave quite differently from other important universal codes such as the 2-part MDL, Shtarkov, and Bayes codes, for which c = 1. This behavior is undesirable in an MDL model selection setting.
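As a rough illustration of the rate stated above, the following is a minimal simulation sketch, not taken from the paper: the normal location family {N(θ, 1)}, the two-point data source, the starting guess of 0, and the number of runs are all illustrative assumptions. It codes the data prequentially with the plug-in maximum likelihood predictor and compares the accumulated redundancy with (c/2) ln n.

    import numpy as np

    # Data source P puts mass 1/2 on -2 and +2, so sigma1^2 = 4.
    # The KL-closest model element is M* = N(0, 1), so sigma2^2 = 1 and
    # c = sigma1^2 / sigma2^2 = 4; predicted redundancy is (c/2) ln n.
    rng = np.random.default_rng(0)
    n_reps, n = 200, 10_000
    x = rng.choice([-2.0, 2.0], size=(n_reps, n))

    # Plug-in predictor: ML estimate (running mean) of the previous
    # observations; the very first prediction uses a starting guess of 0.
    cummean = np.cumsum(x, axis=1) / np.arange(1, n + 1)
    theta_hat = np.concatenate([np.zeros((n_reps, 1)), cummean[:, :-1]], axis=1)

    # Code lengths in nats; the Gaussian normalizing constant cancels in the
    # difference, so the redundancy relative to M* = N(0, 1) is simply:
    redundancy = 0.5 * ((x - theta_hat) ** 2 - x ** 2).sum(axis=1)

    print(f"mean redundancy over {n_reps} runs : {redundancy.mean():6.1f} nats")
    print(f"(c/2) ln n with c = 4 (this setup) : {2.0 * np.log(n):6.1f} nats")
    print(f"(c/2) ln n with c = 1 (Bayes etc.) : {0.5 * np.log(n):6.1f} nats")

Under these assumptions the averaged redundancy should track the c = 4 line rather than the c = 1 line associated with the Bayes, Shtarkov, and 2-part codes, up to lower-order terms.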
PUBLICATION RECORD
- Publication year: 2005
- Venue: Annual Conference Computational Learning Theory
- Publication date: 2005-02-01
- Fields of study: Mathematics, Computer Science
- Source metadata: Semantic Scholar
REFERENCES
27 references
CITED BY
21 citing papers