Memorization-Compression Cycles Improve Generalization

📄 Abstract

We prove theoretically that generalization improves not only through data scaling but also by compressing internal representations. To operationalize this insight, we introduce the Information Bottleneck Language Modeling (IBLM) objective, which reframes language modeling as a constrained optimization problem: minimizing representation entropy subject to optimal prediction performance. Empirically, we observe an emergent memorization-compression cycle during LLM pretraining, evidenced by oscillating positive/negative alignment between the gradients of cross-entropy and Matrix-Based Entropy (MBE), a measure of representation entropy. This pattern closely mirrors the predictive-compressive trade-off prescribed by IBLM and also parallels the biological alternation between awake learning and sleep consolidation. Motivated by this observation, we propose Gated Phase Transition (GAPT), a training algorithm that adaptively switches between memorization and compression phases. When applied to GPT-2 pretraining on the FineWeb dataset, GAPT reduces MBE by 50% and improves cross-entropy by 4.8%. GAPT also improves OOD generalization by 35% in a pretraining task on arithmetic multiplication. In a setting designed to simulate catastrophic forgetting, GAPT reduces interference by compressing and separating representations, achieving a 97% improvement in separation, paralleling the functional role of sleep consolidation.
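
To make the compression measure concrete, here is a minimal sketch of Matrix-Based Entropy (MBE) over a batch of hidden states, assuming the common trace-normalized Gram-matrix formulation of matrix-based Rényi entropy (with the order-1 limit as the default); the paper's exact kernel choice and normalization may differ.

```python
import torch
import torch.nn.functional as F

def matrix_based_entropy(h: torch.Tensor, alpha: float = 1.0, eps: float = 1e-12) -> torch.Tensor:
    """Matrix-based entropy of a batch of representations h with shape (n, dim).

    Sketch under assumptions: unit-normalized features, linear-kernel Gram matrix,
    trace normalization so the eigenvalues form a probability distribution.
    """
    h = F.normalize(h, dim=-1)                     # unit-norm rows
    k = h @ h.t()                                  # (n, n) Gram matrix
    k = k / k.trace()                              # eigenvalues now sum to 1
    lam = torch.linalg.eigvalsh(k).clamp_min(eps)  # spectrum of the normalized Gram matrix
    if alpha == 1.0:
        return -(lam * lam.log()).sum()            # Shannon / von Neumann limit
    return lam.pow(alpha).sum().log() / (1.0 - alpha)  # Rényi entropy of order alpha

# Example: entropy of random 768-dim hidden states for 64 tokens.
h = torch.randn(64, 768)
print(matrix_based_entropy(h).item())  # lower values indicate more compressed representations
```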
Authors (1): Fangyuan Yu
Submitted: May 13, 2025
arXiv Category: cs.LG

Key Contributions

Theoretically proves that generalization improves via representation compression and introduces the IBLM objective. Empirically observes an emergent memorization-compression cycle during LLM pretraining and proposes GAPT, a training algorithm that adaptively switches between these phases to improve generalization.
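
The abstract does not spell out GAPT's gating rule, so the sketch below only illustrates the two-phase structure it describes: a memorization phase trained on cross-entropy alone and a compression phase that adds an MBE penalty. The plateau-based switch, the mbe_weight and patience values, and the assumption that the model returns (logits, hidden states) are all placeholders, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def mbe(h: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    # Compact matrix-based entropy, same formulation as the earlier sketch.
    h = F.normalize(h, dim=-1)
    k = h @ h.t()
    k = k / k.trace()
    lam = torch.linalg.eigvalsh(k).clamp_min(eps)
    return -(lam * lam.log()).sum()

def gapt_train(model, loader, optimizer, mbe_weight: float = 0.1, patience: int = 200):
    """GAPT-style loop alternating memorization and compression phases.

    Assumptions (not from the paper): model(input_ids) returns (logits, hidden),
    and phases are toggled when cross-entropy stops improving for `patience` steps.
    """
    phase, best_ce, stale = "memorize", float("inf"), 0
    for batch in loader:
        logits, hidden = model(batch["input_ids"])
        ce = F.cross_entropy(logits.view(-1, logits.size(-1)), batch["labels"].view(-1))
        if phase == "compress":
            # Subsample tokens so the Gram matrix inside the MBE penalty stays small.
            loss = ce + mbe_weight * mbe(hidden.flatten(0, 1)[:512])
        else:
            loss = ce
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Placeholder gate: toggle phase when cross-entropy plateaus.
        if ce.item() < best_ce - 1e-3:
            best_ce, stale = ce.item(), 0
        else:
            stale += 1
        if stale >= patience:
            phase = "compress" if phase == "memorize" else "memorize"
            stale = 0
```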

Business Value

Leads to more robust and reliable AI models that generalize better to unseen data, reducing the need for massive datasets and improving performance in real-world applications.