Stability and Sharper Risk Bounds with Convergence Rate $\tilde{O}(1/n^2)$

Abstract

Prior work (Klochkov & Zhivotovskiy, 2021) establishes excess risk bounds of at most $O\left(\log(n)/n\right)$ with high probability via algorithmic stability for strongly convex learners. We show that under similar common assumptions, namely the Polyak-Łojasiewicz condition, smoothness, and Lipschitz continuity of the losses, rates of up to $O\left(\log^2(n)/n^2\right)$ are achievable. To our knowledge, our analysis also provides the tightest high-probability bounds for gradient-based generalization gaps in nonconvex settings.
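
For reference, here are the standard forms of two of the notions named in the abstract, as a sketch only; the exact constants and regularity conditions are those stated in the paper. An objective $F$ satisfies the Polyak-Łojasiewicz (PL) condition with parameter $\mu > 0$ if

$$\frac{1}{2}\,\bigl\|\nabla F(w)\bigr\|^2 \;\ge\; \mu\,\Bigl(F(w) - \inf_{w'} F(w')\Bigr) \quad \text{for all } w,$$

and an algorithm $A$ is $\gamma$-uniformly stable if, for any two samples $S$ and $S'$ differing in a single point,

$$\sup_{z}\,\bigl|\ell(A(S), z) - \ell(A(S'), z)\bigr| \;\le\; \gamma.$$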
Authors (4)
Bowei Zhu
Shaojie Li
Mingyang Yi
Yong Liu
Submitted
October 13, 2024
arXiv Category
cs.LG

Key Contributions

Establishes tighter high-probability excess risk bounds of $O(\log^2(n)/n^2)$ via algorithmic stability under the Polyak-Łojasiewicz condition, smoothness, and Lipschitz continuity of the losses, improving on the prior $O(\log(n)/n)$ bounds for strongly convex learners. Also provides the tightest known high-probability bounds for gradient-based generalization gaps in nonconvex settings; a schematic comparison of the rates is sketched below.
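
As a schematic reading of the rates stated in the abstract (not the paper's exact theorem statements, whose constants and conditions are given in the paper itself), and under the illustrative assumption that both bounds control the population excess risk of the learned model $A(S)$ on a sample $S$ of size $n$ against the population risk $F$:

$$F(A(S)) - \inf_{w} F(w) \;=\;
\begin{cases}
O\!\left(\log(n)/n\right) & \text{(Klochkov \& Zhivotovskiy, 2021; strong convexity)}\\[4pt]
O\!\left(\log^2(n)/n^2\right) & \text{(this paper; PL condition, smoothness, Lipschitz losses)}
\end{cases}$$

with each bound holding with high probability and the dependence on the confidence parameter absorbed into the constants here.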

Business Value

Provides stronger theoretical guarantees for the performance and generalization of machine learning algorithms, which can lead to the development of more reliable and robust AI systems.