Teaching Pretrained Language Models to Think Deeper with Retrofitted Recurrence
Sean McLeish, Ang Li, John Kirchenbauer, Dayal Singh Kalra, Brian R. Bartoldson, B. Kailkhura, Avi Schwarzschild, Jonas Geiping, Tom Goldstein, Micah Goldblum
Published 2025 on arXiv
ABSTRACT
Recent advances in depth-recurrent language models show that recurrence can decouple train-time compute and parameter count from test-time compute. In this work, we study how to convert existing pretrained non-recurrent language models into depth-recurrent models. We find that using a curriculum of recurrences to increase the effective depth of the model over the course of training preserves performance while reducing total computational cost. In our experiments on mathematics, we observe that converting pretrained models to recurrent ones results in better performance at a given compute budget than simply post-training the original non-recurrent language model.
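The abstract names two mechanisms without giving an implementation: a depth-recurrent model that reuses one shared block of layers several times, and a curriculum that raises the recurrence count over training. Below is a minimal PyTorch sketch of those two ideas as we read them; the names (DepthRecurrentBlock, recurrence_curriculum), the linear ramp schedule, and all hyperparameters are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class DepthRecurrentBlock(nn.Module):
    """Hypothetical sketch: one shared stack of layers applied r times,
    so effective depth grows with r while the parameter count is fixed."""

    def __init__(self, d_model: int, n_layers: int, n_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.core = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, h: torch.Tensor, num_recurrences: int) -> torch.Tensor:
        # Re-apply the same weights num_recurrences times: effective depth
        # is n_layers * num_recurrences at no extra parameter cost.
        for _ in range(num_recurrences):
            h = self.core(h)
        return h

def recurrence_curriculum(step: int, total_steps: int, max_r: int) -> int:
    """One plausible 'curriculum of recurrences' (an assumed schedule, not
    necessarily the paper's): linearly ramp the recurrence count from 1 to
    max_r, so early training steps are shallow and cheap, later ones deep."""
    frac = step / max(total_steps, 1)
    return max(1, round(frac * max_r))

# Example: at step 2_500 of 10_000 with max_r=8, r = 2 recurrences.
block = DepthRecurrentBlock(d_model=256, n_layers=4)
h = torch.randn(1, 16, 256)                        # (batch, seq, d_model)
r = recurrence_curriculum(step=2_500, total_steps=10_000, max_r=8)
out = block(h, num_recurrences=r)                  # same shape as h

Because the recurrent core's parameter count is fixed while num_recurrences is a free knob at inference time, raising it buys more test-time compute without retraining; this is the decoupling of train-time compute and parameters from test-time compute that the abstract refers to.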
PUBLICATION RECORD
- Publication year
2025
- Venue
arXiv.org
- Publication date
2025-11-10
- Fields of study
Mathematics, Computer Science
- Source metadata
Semantic Scholar