The Coverage Principle: How Pre-training Enables Post-Training

📄 Abstract

Language models demonstrate remarkable abilities when pre-trained on large text corpora and fine-tuned for specific tasks, but how and why pre-training shapes the success of the final model remains poorly understood. Notably, although pre-training success is often quantified by cross-entropy loss, cross-entropy can be a poor predictor of downstream performance. Instead, we provide a theoretical perspective on this relationship through the lens of 'coverage', which quantifies the probability mass the pre-trained model places on high-quality responses and which is necessary and sufficient for post-training and test-time scaling methods such as Best-of-N to succeed. Our main results develop an understanding of the 'coverage principle', a phenomenon whereby next-token prediction (more generally, maximum likelihood) implicitly optimizes toward a model with good coverage. In particular, we uncover a mechanism that explains the power of coverage in predicting downstream performance: coverage generalizes faster than cross-entropy, avoiding spurious dependence on problem-dependent parameters such as the sequence length. We also study practical algorithmic interventions with provable benefits for improving coverage, including (i) model/checkpoint selection procedures, (ii) gradient normalization schemes, and (iii) test-time decoding strategies.
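
The following toy sketch (not from the paper; the response space, probabilities, and function names are made up for illustration) shows why coverage, understood as the probability mass a model places on high-quality responses, is exactly what Best-of-N needs to succeed.

```python
import random

# Toy illustration (not from the paper): why coverage -- the probability
# mass a model places on high-quality responses -- governs Best-of-N.
# The response space, probabilities, and quality labels below are made up.

model_probs = {
    "correct_answer_a": 0.03,
    "correct_answer_b": 0.02,
    "wrong_answer_c": 0.60,
    "wrong_answer_d": 0.35,
}
high_quality = {"correct_answer_a", "correct_answer_b"}

# Coverage: total probability mass on high-quality responses.
coverage = sum(p for r, p in model_probs.items() if r in high_quality)

def sample_response(rng: random.Random) -> str:
    """Draw one response from the toy model."""
    return rng.choices(list(model_probs), weights=list(model_probs.values()))[0]

def best_of_n(n: int, rng: random.Random) -> bool:
    """Best-of-N with an oracle verifier: succeed if any sample is high quality."""
    return any(sample_response(rng) in high_quality for _ in range(n))

if __name__ == "__main__":
    rng = random.Random(0)
    n, trials = 32, 2000
    empirical = sum(best_of_n(n, rng) for _ in range(trials)) / trials
    # With an oracle verifier, P(success) = 1 - (1 - coverage)**n, so
    # nonzero coverage is precisely what Best-of-N needs in order to succeed.
    predicted = 1 - (1 - coverage) ** n
    print(f"coverage={coverage:.3f} predicted={predicted:.3f} empirical={empirical:.3f}")
```

Running the script shows the empirical Best-of-N success rate tracking 1 - (1 - coverage)^n, which is why a model with negligible coverage cannot be rescued by sampling more candidates.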
Authors (8)
Fan Chen
Audrey Huang
Noah Golowich
Sadhika Malladi
Adam Block
Jordan T. Ash
+2 more
Submitted: October 16, 2025
arXiv Category: stat.ML

Key Contributions

Introduces the 'coverage principle', a theoretical framework explaining why pre-training on large text corpora enables successful fine-tuning. Coverage quantifies the probability mass the pre-trained model places on high-quality responses and is shown to be necessary and sufficient for the success of post-training and test-time scaling methods such as Best-of-N, making it a better predictor of downstream performance than cross-entropy; see the sketch below for the checkpoint-selection angle.
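
As a hedged illustration of the checkpoint-selection intervention listed in the abstract (a sketch under assumptions, not the paper's actual procedure): rank checkpoints by a coverage proxy, i.e., how much sequence log-probability each one assigns to held-out high-quality reference responses, aggregated pessimistically, rather than by average cross-entropy alone. The function names and the quantile-based aggregate below are assumptions made for illustration.

```python
from typing import Callable, Dict, Sequence, Tuple

# Sketch of coverage-style checkpoint selection (illustrative assumption,
# not the paper's algorithm). A checkpoint is scored by how much sequence
# log-probability it assigns to held-out high-quality reference responses,
# summarized by a low quantile so that no good response is left with
# negligible mass.

LogProbFn = Callable[[str, str], float]  # (prompt, response) -> log p(response | prompt)

def coverage_proxy(logprob: LogProbFn,
                   eval_set: Sequence[Tuple[str, str]],
                   quantile: float = 0.1) -> float:
    """Low-quantile sequence log-likelihood over (prompt, good_response) pairs."""
    scores = sorted(logprob(x, y) for x, y in eval_set)
    idx = max(0, min(len(scores) - 1, int(quantile * len(scores))))
    return scores[idx]

def select_checkpoint(checkpoints: Dict[str, LogProbFn],
                      eval_set: Sequence[Tuple[str, str]]) -> str:
    """Pick the checkpoint whose coverage proxy is largest."""
    return max(checkpoints, key=lambda name: coverage_proxy(checkpoints[name], eval_set))
```

A rule of this shape targets probability mass on good responses directly, whereas mean cross-entropy averages over every token and can be swamped by sequence-length effects, which is the kind of spurious dependence the abstract highlights.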

Business Value

Offers a deeper understanding of LLM training, enabling more efficient development and selection of models that generalize better to downstream tasks, ultimately saving computational resources and improving performance.