📄 Abstract
Training deep neural networks is a structured optimization problem, because
the parameters are naturally represented by matrices and tensors rather than by
vectors. Under this structural representation, it has been widely observed that
gradients are low-rank and Hessians are approximately block diagonal. These
structured properties are crucial for designing efficient optimization
algorithms, but are not exploited by many popular optimizers such as Adam.
In this paper, we present a novel optimization algorithm ASGO that capitalizes
on these properties by employing a preconditioner that is adaptively updated
using structured gradients. Through a fine-grained theoretical analysis, ASGO
is proven to achieve convergence rates superior to those of existing structured
gradient methods. Based on this convergence theory, we further demonstrate that
ASGO can benefit from low-rank gradients and block diagonal Hessians. We also
discuss practical modifications of ASGO and empirically verify ASGO's
effectiveness on language model tasks. Code is available at
https://github.com/infinity-stars/ASGO.
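
The abstract does not spell out ASGO's update rule, only that it applies "a preconditioner that is adaptively updated using structured gradients" to matrix-shaped parameters. The NumPy sketch below illustrates that general idea and nothing more: the one-sided accumulator V = V + G Gᵀ, the inverse-square-root preconditioning, and the name asgo_like_step are assumptions made for illustration, not the authors' actual algorithm (see the linked repository for the real implementation).

```python
# Minimal, illustrative sketch (NOT the authors' implementation) of an
# adaptive structured preconditioner applied to a matrix-shaped parameter.
# Assumed update (not stated in the abstract):
#   V_t = V_{t-1} + G_t G_t^T,   W_{t+1} = W_t - lr * (V_t^{1/2} + eps I)^{-1} G_t
import numpy as np

def asgo_like_step(W, G, V, lr=1e-2, eps=1e-8):
    """One hypothetical preconditioned step on a matrix parameter.

    W : (m, n) parameter matrix
    G : (m, n) gradient matrix (often approximately low-rank in practice)
    V : (m, m) accumulated structured second-moment statistic
    """
    V = V + G @ G.T  # accumulate statistics from the full gradient matrix
    # Inverse square root of the symmetric PSD accumulator via eigendecomposition.
    eigvals, eigvecs = np.linalg.eigh(V)
    inv_sqrt = eigvecs @ np.diag(1.0 / (np.sqrt(np.maximum(eigvals, 0.0)) + eps)) @ eigvecs.T
    W = W - lr * inv_sqrt @ G  # precondition the whole gradient matrix at once
    return W, V

# Toy usage: one step with a synthetic low-rank gradient.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
G = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 32))
V = np.zeros((64, 64))
W, V = asgo_like_step(W, G, V)
```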
Authors (7)
Kang An
Yuxing Liu
Rui Pan
Yi Ren
Shiqian Ma
Donald Goldfarb
+1 more
Key Contributions
- ASGO, a novel optimizer whose preconditioner is adaptively updated from structured (matrix-shaped) gradients.
- A fine-grained convergence analysis proving rates superior to those of existing structured gradient methods.
- A demonstration, built on this theory, that ASGO benefits from low-rank gradients and block diagonal Hessians.
- Practical modifications of ASGO and empirical validation on language model tasks, with code released at https://github.com/infinity-stars/ASGO.