
Learning Linear Attention in Polynomial Time

Abstract

Previous research has explored the computational expressivity of Transformer models in simulating Boolean circuits or Turing machines. However, the learnability of these simulators from observational data has remained an open question. Our study addresses this gap by providing the first polynomial-time learnability results (specifically strong, agnostic PAC learning) for single-layer Transformers with linear attention. We show that linear attention may be viewed as a linear predictor in a suitably defined RKHS. As a consequence, the problem of learning any linear transformer may be converted into the problem of learning an ordinary linear predictor in an expanded feature space, and any such predictor may be converted back into a multiheaded linear transformer. Moving to generalization, we show how to efficiently identify training datasets for which every empirical risk minimizer is equivalent (up to trivial symmetries) to the linear Transformer that generated the data, thereby guaranteeing the learned model will correctly generalize across all inputs. Finally, we provide examples of computations expressible via linear attention and therefore polynomial-time learnable, including associative memories, finite automata, and a class of Universal Turing Machines (UTMs) with polynomially bounded computation histories. We empirically validate our theoretical findings on three tasks: learning random linear attention networks, learning key-value associations, and learning to execute finite automata. Our findings bridge a critical gap between theoretical expressivity and learnability of Transformers, and show that flexible and general models of computation are efficiently learnable.
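
To make the reduction concrete, here is a minimal numerical sketch (our own illustration, not code from the paper) showing that a single linear-attention head computes exactly a linear function of degree-3 features of the input sequence. The variable names, dimensions, and random parameters below are assumptions chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 6                        # embedding dimension, sequence length (illustrative)
X = rng.normal(size=(n, d))        # one input sequence; rows are token embeddings

# Random parameters of a single linear-attention head (hypothetical example values).
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))

# Linear attention (no softmax):
#   out_i = sum_j (x_i^T W_Q^T W_K x_j) * (W_V x_j)
Q, K, V = X @ W_Q.T, X @ W_K.T, X @ W_V.T
out_attn = (Q @ K.T) @ V           # shape (n, d)

# The same map, written as a linear predictor over expanded features:
#   phi(X, i)[b, c, e] = sum_j x_i[b] * x_j[c] * x_j[e]          (d^3 features per position)
#   out_i[a]           = sum_{b,c,e} T[a, b, c, e] * phi(X, i)[b, c, e]
# where the parameter tensor T factors through the attention weights.
phi = np.einsum('ib,jc,je->ibce', X, X, X)             # (n, d, d, d)
T = np.einsum('bc,ae->abce', W_Q.T @ W_K, W_V)         # (d, d, d, d)
out_linear = np.einsum('abce,ibce->ia', T, phi)

assert np.allclose(out_attn, out_linear)
print("linear attention == linear predictor on expanded features")
```

Under this view, fitting the expanded linear predictor and factoring its parameter tensor back into attention weights is what makes the polynomial-time learning result plausible; the sketch only checks the forward-pass identity.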
Authors (6)
Morris Yau
Ekin Akyürek
Jiayuan Mao
Joshua B. Tenenbaum
Stefanie Jegelka
Jacob Andreas
Submitted
October 14, 2024
arXiv Category
cs.LG
arXiv PDF

Key Contributions

This paper provides the first polynomial-time learnability results (strong, agnostic PAC learning) for single-layer Transformers with linear attention. It demonstrates that linear attention can be viewed as a linear predictor in a suitably defined reproducing kernel Hilbert space (RKHS), which yields efficient learning algorithms and generalization guarantees.
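
As one illustration of the expressivity claims (associative memories are among the computations the abstract lists as expressible via linear attention), the sketch below shows how the readout that linear attention computes, out(q) = sum_j (q · k_j) v_j, acts as a key-value associative memory when the stored keys are orthonormal. The construction, dimensions, and names are our own assumptions for illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 16, 4                          # key/value dimension, number of stored pairs (illustrative)

# Orthonormal keys make retrieval exact; random values are associated with them.
keys = np.linalg.qr(rng.normal(size=(d, m)))[0].T     # (m, d), rows orthonormal
values = rng.normal(size=(m, d))

# Linear attention with identity projections reduces to
#   out(q) = sum_j (q . k_j) v_j = (sum_j v_j k_j^T) q,
# i.e. a classical outer-product associative memory.
M = values.T @ keys                   # (d, d) memory matrix, sum_j v_j k_j^T

q = keys[2]                           # query with the third stored key
retrieved = M @ q
assert np.allclose(retrieved, values[2])
print("retrieved the value associated with key 2")
```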

Business Value

Enables the development of more efficient and theoretically sound Transformer models, potentially leading to faster training times and more reliable performance in various AI applications.