A unified framework for establishing the universal approximation of transformer-type architectures

📄 Abstract

We investigate the universal approximation property (UAP) of transformer-type architectures, providing a unified theoretical framework that extends prior results on residual networks to models incorporating attention mechanisms. Our work identifies token distinguishability as a fundamental requirement for UAP and introduces a general sufficient condition that applies to a broad class of architectures. Leveraging an analyticity assumption on the attention layer, we can significantly simplify the verification of this condition, yielding a non-constructive approach to establishing UAP for such architectures. We demonstrate the applicability of our framework by proving UAP for transformers with various attention mechanisms, including kernel-based and sparse attention mechanisms. The corollaries of our results either generalize prior works or establish UAP for architectures not previously covered. Furthermore, our framework offers a principled foundation for designing novel transformer architectures with inherent UAP guarantees, including those with specific functional symmetries. We provide examples to illustrate these insights.
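For orientation, a UAP statement typically takes the following standard form; the paper's exact domain, norm, and hypotheses may differ. A family \(\mathcal{F}\) of sequence-to-sequence maps has the UAP on a compact set \(K \subset \mathbb{R}^{d \times n}\) if

\[
\forall f \in C(K, \mathbb{R}^{d \times n}),\ \forall \varepsilon > 0,\ \exists F \in \mathcal{F}:\ \sup_{X \in K} \|F(X) - f(X)\| < \varepsilon .
\]

In this reading, token distinguishability asks that the architecture be able to map inputs with distinct tokens to distinct representations, a natural prerequisite for approximating targets that treat such inputs differently.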
Authors (4): Jingpu Cheng, Ting Lin, Zuowei Shen, Qianxiao Li
Submitted: June 30, 2025
arXiv Category: cs.LG

Key Contributions

Establishes a unified theoretical framework for proving the universal approximation property (UAP) of transformer-type architectures, identifying token distinguishability as a key requirement. The framework generalizes prior UAP results and provides a principled foundation for designing novel transformer variants with built-in UAP guarantees.
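To make "transformers with various attention mechanisms" concrete, the sketch below contrasts standard softmax attention with a kernel-based variant, in the spirit of linear-attention work. The feature map `phi` here is an illustrative choice, not the paper's; this is a minimal instance of the architecture class, not code from the paper.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def kernel_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernel-based attention: replace exp(q.k) with phi(q).phi(k).

    phi is an illustrative positive feature map (shifted ReLU), so each
    row of similarities is positive and can be normalized like softmax.
    """
    Qp, Kp = phi(Q), phi(K)                         # feature-mapped queries/keys
    weights = Qp @ Kp.T                             # kernel similarities k(q_i, k_j)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-normalize
    return weights @ V

# Toy usage: n = 4 tokens, d = 8 dimensions, self-attention on X
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
print(softmax_attention(X, X, X).shape)  # (4, 8)
print(kernel_attention(X, X, X).shape)   # (4, 8)
```

Both maps send a token sequence in \(\mathbb{R}^{n \times d}\) to another sequence of the same shape; the framework's contribution is a uniform sufficient condition under which stacking such layers with feed-forward blocks yields UAP.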

Business Value

Provides a foundational account of why transformer architectures are expressive, guiding the design of more effective and efficient variants for a range of AI tasks and offering expressivity guarantees for new architectures before they are trained at scale.