This paper provides a theoretical framework, the 'φ Curve', for understanding generalization in machine learning through norm-based capacity measures rather than parameter counts. It precisely characterizes how concentration of the estimator norm governs test error in random features models, revealing a phase transition but no double descent: when capacity is measured by norm instead of parameter count, the classical U-shaped error curve is recovered.
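The contrast between parameter-count and norm-based capacity can be illustrated with a small sketch. The snippet below is a minimal, hypothetical example (not the paper's exact setup or notation): it fits ridge regression on ReLU random features for several feature counts p, and records both the parameter count and the estimator's Euclidean norm alongside the test error, since the norm rather than p is the capacity measure the framework tracks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a noisy linear target (illustrative assumption only).
d, n_train, n_test = 10, 200, 200
w_star = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_star + 0.1 * rng.normal(size=n)
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

def random_features(X, W):
    """ReLU random features: phi(x) = max(W x, 0)."""
    return np.maximum(X @ W.T, 0.0)

results = []
for p in [20, 100, 500]:  # number of random features (parameter count)
    W = rng.normal(size=(p, d)) / np.sqrt(d)  # fixed random first layer
    Phi_tr = random_features(X_tr, W)
    Phi_te = random_features(X_te, W)
    lam = 1e-3  # small ridge penalty
    # Ridge estimator: theta = (Phi^T Phi + lam I)^{-1} Phi^T y
    theta = np.linalg.solve(
        Phi_tr.T @ Phi_tr + lam * np.eye(p), Phi_tr.T @ y_tr
    )
    test_mse = np.mean((Phi_te @ theta - y_te) ** 2)
    norm = np.linalg.norm(theta)  # norm-based capacity, distinct from p
    results.append((p, norm, test_mse))

for p, norm, err in results:
    print(f"p={p:4d}  ||theta||={norm:8.3f}  test MSE={err:.4f}")
```

Plotting test error against ||theta|| rather than against p is what, in the paper's framework, removes the apparent double-descent shape and recovers a single U-shaped curve.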
This deepens the fundamental understanding of why and how deep learning models generalize, which can guide the development of more robust and predictable AI systems.