Abstract
The theory of training deep networks has become a central question of modern
machine learning and has inspired many practical advancements. In particular,
the gradient descent (GD) optimization algorithm has been extensively studied
in recent years. A key assumption about GD has appeared in several recent
works: the GD map is non-singular -- it preserves sets of measure zero
under preimages. Crucially, this assumption has been used to prove that GD
avoids saddle points and maxima, and to establish the existence of a computable
quantity that determines the convergence to global minima (both for GD and
stochastic GD). However, the current literature either assumes the
non-singularity of the GD map outright or imposes restrictive assumptions, such
as Lipschitz smoothness of the loss (which fails, for example, for deep ReLU
networks with the cross-entropy loss), and restricts the analysis to GD with
small step-sizes. In this paper, we investigate the neural network map
as a function on the space of weights and biases. We also prove, for the first
time, the non-singularity of the GD map on the loss
landscape of realistic neural network architectures (with fully connected,
convolutional, or softmax attention layers) and piecewise analytic activations
(which include sigmoid, ReLU, leaky ReLU, etc.) for almost all step-sizes. Our
work significantly extends the existing results on the convergence of GD and
SGD by guaranteeing that they apply to practical neural network settings and
has the potential to unlock further exploration of learning dynamics.
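As a minimal sketch of the central notion (notation ours, not taken verbatim from the paper): writing L for the loss, \eta for the step-size, and \lambda for the Lebesgue measure on the space of weights and biases, the GD map and the non-singularity property described above can be written as

    G_\eta(\theta) = \theta - \eta \nabla L(\theta),
    \qquad \lambda(A) = 0 \;\Longrightarrow\; \lambda\bigl(G_\eta^{-1}(A)\bigr) = 0.

Non-singularity in this sense is what allows measure-zero arguments to be pushed through the GD iteration, which is how the assumption enters the saddle-avoidance and convergence results mentioned in the abstract.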
Authors (2)
Alexandru Crăciun
Debarghya Ghoshdastidar
Submitted
October 28, 2025
Key Contributions
This paper investigates the non-singularity of the Gradient Descent (GD) map for neural networks with piecewise analytic activations. It relaxes restrictive assumptions from prior work, supporting theoretical guarantees that GD avoids saddle points and can converge to global minima, even for non-Lipschitz losses such as the cross-entropy loss on deep ReLU networks.
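As a rough, hypothetical illustration of the object being analyzed (not code from the paper), the sketch below applies one step of the GD map G_eta(theta) = theta - eta * grad L(theta) to a small fully connected ReLU network with the cross-entropy loss, i.e., the non-Lipschitz setting highlighted above; the architecture, data, and step-size are placeholder choices.

    # Illustrative sketch only: one application of the GD map on parameter space
    # for a small ReLU network with cross-entropy loss (placeholder setup).
    import torch

    torch.manual_seed(0)
    model = torch.nn.Sequential(           # fully connected ReLU network
        torch.nn.Linear(10, 32),
        torch.nn.ReLU(),
        torch.nn.Linear(32, 3),
    )
    x = torch.randn(64, 10)                # hypothetical inputs
    y = torch.randint(0, 3, (64,))         # hypothetical class labels
    loss_fn = torch.nn.CrossEntropyLoss()  # cross-entropy loss

    def gd_map(params, eta=0.1):
        """One step of the GD map: theta -> theta - eta * grad L(theta)."""
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        return [p - eta * g for p, g in zip(params, grads)]

    params = list(model.parameters())
    new_params = gd_map(params)            # theta_{t+1} = G_eta(theta_t)

Iterating this map, and asking whether preimages of measure-zero sets of parameters under it remain measure zero, is the question the paper's non-singularity result addresses.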
Business Value
Provides a stronger theoretical foundation for understanding and guaranteeing the convergence of deep learning models, which can lead to more reliable and predictable training processes.