Abstract: Deep-learning models can be viewed as compositions of functions definable in so-called tame geometry. In this expository note, we give an overview of some topics at the interface of tame geometry (also known as o-minimality), optimization theory, and deep-learning theory and practice. To do so, we gradually introduce the concepts and tools used to build convergence guarantees for stochastic gradient descent in a general nonsmooth, nonconvex, but tame setting. This illustrates some of the ways in which tame geometry provides a natural mathematical framework for the study of AI systems, especially within deep learning.
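
To make the setting concrete, here is a minimal sketch (not taken from the paper) of stochastic subgradient descent on a nonsmooth, nonconvex but tame loss. The example loss f(x) = |x^2 - 1| is piecewise polynomial, hence semialgebraic and definable in an o-minimal structure; the loss, noise model, and step-size schedule below are illustrative assumptions, not the paper's construction.

```python
import random

def loss(x, noise=0.0):
    # Nonsmooth, nonconvex, semialgebraic (hence tame) loss,
    # perturbed by a hypothetical zero-mean noise term.
    return abs(x * x - 1.0) + noise * x

def subgradient(x, noise=0.0):
    # A Clarke subgradient of |x^2 - 1|, plus the gradient of the noise term.
    sign = 1.0 if x * x - 1.0 >= 0.0 else -1.0
    return sign * 2.0 * x + noise

def sgd(x0=3.0, steps=2000, seed=0):
    random.seed(seed)
    x = x0
    for k in range(1, steps + 1):
        gamma = 1.0 / k                  # diminishing steps: sum diverges, sum of squares converges
        noise = random.gauss(0.0, 0.1)   # zero-mean stochastic perturbation of the subgradient
        x = x - gamma * subgradient(x, noise)
    return x

if __name__ == "__main__":
    # The iterates settle near a Clarke-critical point of the loss (x = 1, -1, or 0).
    print(sgd())
```

Convergence guarantees of the kind surveyed in the note concern exactly this type of iteration: under tameness of the loss, suitable step sizes, and bounded noise, the iterates accumulate at critical points rather than oscillating indefinitely.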