
How do simple rotations affect the implicit bias of Adam?

Abstract

Adaptive gradient methods such as Adam and Adagrad are widely used in machine learning, yet their effect on the generalization of learned models -- relative to methods like gradient descent -- remains poorly understood. Prior work on binary classification suggests that Adam exhibits a "richness bias," which can help it learn nonlinear decision boundaries closer to the Bayes-optimal decision boundary relative to gradient descent. However, the coordinate-wise preconditioning scheme employed by Adam renders the overall method sensitive to orthogonal transformations of feature space. We show that this sensitivity can manifest as a reversal of Adam's competitive advantage: even small rotations of the underlying data distribution can make Adam forfeit its richness bias and converge to a linear decision boundary that is farther from the Bayes-optimal decision boundary than the one learned by gradient descent. To alleviate this issue, we show that a recently proposed reparameterization method -- which applies an orthogonal transformation to the optimization objective -- endows any first-order method with equivariance to data rotations, and we empirically demonstrate its ability to restore Adam's bias towards rich decision boundaries.
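The rotation sensitivity the abstract describes stems from Adam dividing each gradient coordinate by a running estimate of that coordinate's second moment, an operation that does not commute with an orthogonal change of basis, whereas plain gradient descent does. The snippet below is a minimal illustrative sketch (not code from the paper; the logistic-regression objective, step counts, and hyperparameters are arbitrary assumptions) that compares how far the two methods deviate from exact equivariance when the same problem is trained on rotated features:

```python
# Toy check of rotation equivariance for gradient descent vs. Adam
# (illustrative sketch only; setup and hyperparameters are assumptions).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))                      # features
y = (X @ rng.normal(size=d) > 0).astype(float)   # linearly separable labels

def grad(w, X):
    """Gradient of the mean logistic loss with labels in {0, 1}."""
    s = 1.0 / (1.0 + np.exp(-X @ w))             # sigmoid predictions
    return X.T @ (s - y) / n

def adam_steps(X, steps=50, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    w = np.zeros(X.shape[1]); m = np.zeros_like(w); v = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad(w, X)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g**2             # coordinate-wise second moment
        m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps) # coordinate-wise preconditioning
    return w

def gd_steps(X, steps=50, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, X)
    return w

# Random rotation Q; rotating every feature vector x -> Qx maps X to X @ Q.T.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
X_rot = X @ Q.T

# An equivariant method trained on rotated data should return Q @ (original solution).
print("GD   equivariance gap:", np.linalg.norm(gd_steps(X_rot) - Q @ gd_steps(X)))
print("Adam equivariance gap:", np.linalg.norm(adam_steps(X_rot) - Q @ adam_steps(X)))
```

With this setup, the gradient-descent gap should be zero up to floating-point error, while Adam's gap is not, mirroring the sensitivity to orthogonal transformations described above.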
Authors (3)
Adela DePavia
Vasileios Charisopoulos
Rebecca Willett
Submitted
October 27, 2025
arXiv Category
cs.LG

Key Contributions

Demonstrates that Adam's coordinate-wise preconditioning makes it sensitive to orthogonal transformations of the feature space, which can reverse its generalization advantage over gradient descent: even small rotations of the data can cause Adam to converge to a linear decision boundary farther from the Bayes-optimal one, whereas gradient descent is unaffected by such rotations. The paper also shows that a recently proposed reparameterization method, which applies an orthogonal transformation to the optimization objective, makes any first-order method equivariant to data rotations and empirically restores Adam's bias toward rich decision boundaries.
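The summary describes the fix only at a high level: an orthogonal transformation is applied to the objective before running the base optimizer. One plausible instantiation, sketched below reusing the adam_steps helper from the earlier snippet, draws a fixed random orthogonal matrix R, runs Adam in the rotated coordinates, and maps the result back; the choice of a Haar-random R is our assumption for illustration, not necessarily the construction in the paper.

```python
# Illustrative sketch (assumption, not the paper's exact method): optimize the
# objective after a fixed orthogonal change of variables w = R @ u, i.e. run the
# base optimizer on features X @ R, then map the solution back to w-space.
# Reuses `adam_steps` (and the labels y) from the earlier snippet.
import numpy as np

def reparam_adam(X, rng, steps=50, lr=0.1):
    d = X.shape[1]
    R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # fixed random orthogonal matrix
    u = adam_steps(X @ R, steps=steps, lr=lr)     # Adam in the reparameterized coordinates
    return R @ u                                  # map the iterate back: w = R @ u
```

Because R is drawn independently of the data, rotating the inputs merely replaces the effective transformation seen by the base optimizer with another orthogonal matrix of the same distribution, so the learned predictor becomes rotation-equivariant in distribution regardless of the optimizer wrapped this way.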

Business Value

Improves understanding of how adaptive optimizers such as Adam behave under feature-space rotations, informing optimizer selection and data preprocessing choices that lead to more robust and reliable deep learning models in practice.