Abstract: We develop a stochastic approximation framework for learning nonlinear
operators between infinite-dimensional spaces using general Mercer
operator-valued kernels. Our framework encompasses two key classes: (i) compact
kernels, which admit discrete spectral decompositions, and (ii) diagonal
kernels of the form $K(x,x')=k(x,x')T$, where $k$ is a scalar-valued kernel and
$T$ is a positive operator on the output space. This broad setting induces
expressive vector-valued reproducing kernel Hilbert spaces (RKHSs) that
generalize the classical $K=kI$ paradigm, thereby enabling rich structural
modeling with rigorous theoretical guarantees. To address target operators
lying outside the RKHS, we introduce vector-valued interpolation spaces to
precisely quantify misspecification error. Within this framework, we establish
dimension-free polynomial convergence rates, demonstrating that nonlinear
operator learning can overcome the curse of dimensionality. The use of general
operator-valued kernels further allows us to derive rates for intrinsically
nonlinear operator learning, going beyond the linear-type behavior inherent in
diagonal constructions of $K=kI$. Importantly, this framework accommodates a
wide range of operator learning tasks, from integral operators such as
Fredholm operators to architectures based on encoder-decoder representations.
Moreover, we validate its effectiveness through numerical experiments on the
two-dimensional Navier-Stokes equations.
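
To make the abstract's setting concrete, the following is a minimal sketch (not the paper's implementation) of functional stochastic approximation in a vector-valued RKHS induced by a diagonal operator-valued kernel $K(x,x')=k(x,x')T$. The Gaussian scalar kernel, the smoothing choice of $T$, the toy target operator, the grid sizes, and the step-size schedule are all illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: online stochastic approximation with a diagonal
# operator-valued kernel K(x, x') = k(x, x') * T.  Inputs and outputs are
# functions, represented here on a fixed grid; every concrete choice below
# (kernel, T, target, step sizes) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n_steps, d = 200, 32                      # assumed number of samples and grid size

def k(x, xp, bandwidth=1.0):
    """Scalar Gaussian kernel between two discretized input functions."""
    return np.exp(-np.sum((x - xp) ** 2) / (2 * bandwidth ** 2))

# Positive operator T on the (discretized) output space; a simple
# smoothing operator chosen only for illustration.
T = np.diag(1.0 / np.arange(1, d + 1) ** 2)

def target_operator(x):
    """Toy nonlinear target: a pointwise nonlinearity followed by T."""
    return T @ np.tanh(x)

# Representer-style state: the iterate f_t(.) = sum_s k(., x_s) c_s,
# stored through the past inputs x_s and vector-valued coefficients c_s.
past_x, coeffs = [], []

def f_eval(x):
    if not past_x:
        return np.zeros(d)
    weights = np.array([k(x, xs) for xs in past_x])
    return weights @ np.array(coeffs)

for t in range(n_steps):
    x_t = rng.standard_normal(d)          # sampled (discretized) input function
    y_t = target_operator(x_t)            # observed output function
    gamma_t = 1.0 / (t + 1) ** 0.5        # polynomially decaying step size
    residual = f_eval(x_t) - y_t
    # Functional SGD step: f <- f - gamma_t * k(., x_t) T (f(x_t) - y_t)
    past_x.append(x_t)
    coeffs.append(-gamma_t * (T @ residual))

x_test = rng.standard_normal(d)
err = np.sum((f_eval(x_test) - target_operator(x_test)) ** 2)
print("squared error on a fresh sample:", err)
```

The update rule uses the representer structure of the RKHS iterate, so each step only appends one vector-valued coefficient; this is a generic functional-SGD pattern under the assumed diagonal kernel, not a reproduction of the paper's algorithm or its Navier-Stokes experiments.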