Abstract: We study the Hamiltonian flow for optimization (HF-opt), which simulates the
Hamiltonian dynamics for some integration time and resets the velocity to $0$
to decrease the objective function; this is the optimization analogue of the
Hamiltonian Monte Carlo algorithm for sampling. For short integration time,
HF-opt has the same convergence rates as gradient descent for minimizing
strongly and weakly convex functions. We show that by randomizing the
integration time in HF-opt, the resulting randomized Hamiltonian flow (RHF)
achieves accelerated convergence rates in continuous time, similar to the rates
for the accelerated gradient flow. We study a discrete-time implementation of
RHF as the randomized Hamiltonian gradient descent (RHGD) algorithm. We prove
that RHGD achieves the same accelerated convergence rates as Nesterov's
accelerated gradient descent (AGD) for minimizing smooth strongly and weakly
convex functions. We provide numerical experiments to demonstrate that RHGD is
competitive with classical accelerated methods such as AGD across all settings
and outperforms them in certain regimes.
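As a rough illustration of the algorithmic idea described above, the following Python sketch shows one plausible form of RHGD: each outer iteration resets the velocity to zero and then runs a randomized number of leapfrog steps of the Hamiltonian dynamics for $H(x, v) = f(x) + \tfrac{1}{2}\|v\|^2$. The step size, the geometric distribution over the number of leapfrog steps, and all function and parameter names are illustrative assumptions; the abstract does not specify the paper's precise update rule or parameter choices.

```python
import numpy as np

def rhgd(grad_f, x0, step_size, mean_steps, num_outer_iters, rng=None):
    """Hypothetical sketch of randomized Hamiltonian gradient descent (RHGD).

    Each outer iteration simulates the Hamiltonian dynamics for
    H(x, v) = f(x) + |v|^2 / 2 via leapfrog steps for a randomized
    number of steps (the randomized integration time), then resets the
    velocity to zero. The geometric distribution and parameter names are
    illustrative assumptions, not the paper's prescription.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(num_outer_iters):
        v = np.zeros_like(x)                        # reset velocity to 0
        n_steps = rng.geometric(1.0 / mean_steps)   # randomized integration time
        for _ in range(n_steps):
            # one leapfrog step of the Hamiltonian dynamics
            v -= 0.5 * step_size * grad_f(x)
            x += step_size * v
            v -= 0.5 * step_size * grad_f(x)
    return x

# Usage example: minimize the quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
if __name__ == "__main__":
    grad = lambda x: x
    x_final = rhgd(grad, x0=np.ones(5), step_size=0.1,
                   mean_steps=10, num_outer_iters=200)
    print(np.linalg.norm(x_final))
```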