Abstract
Attention is a fundamental building
block of large language models (LLMs), so there have been many efforts to
implement it efficiently. For example, FlashAttention leverages tiling and
kernel fusion to optimize attention. Recently, a number of variants of
attention have been introduced to enhance model quality or efficiency.
Supporting them efficiently remains difficult since they usually require
specialized kernels or hand-tuned implementations. FlexAttention recently
addressed part of this gap by using static programming templates to support
FlashAttention-like kernels for a subset of attention variants.
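For context, FlexAttention expresses a variant by plugging a small user-defined score-modification callback into a fixed, pre-built kernel template. A minimal sketch of that style is below, assuming PyTorch 2.5+ where the flex_attention API is available; the causal-plus-ALiBi bias and the 0.05 slope are illustrative choices, not taken from the paper.

```python
# Minimal FlexAttention sketch (assumes PyTorch >= 2.5; the bias is illustrative).
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal_alibi(score, b, h, q_idx, kv_idx):
    # Add a distance-based (ALiBi-style) bias, then mask out future positions.
    bias = (kv_idx - q_idx) * 0.05           # illustrative slope, not per-head
    return torch.where(q_idx >= kv_idx, score + bias, float("-inf"))

q = torch.randn(2, 4, 128, 64)               # (batch, heads, seq_len, head_dim)
k = torch.randn(2, 4, 128, 64)
v = torch.randn(2, 4, 128, 64)
out = flex_attention(q, k, v, score_mod=causal_alibi)
```

Roughly speaking, variants that fit this per-score (or per-mask) callback form are supported by the template; logic that cannot be phrased that way falls outside it.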
In this paper, we introduce Flashlight, a compiler-native framework within
the PyTorch ecosystem that automatically generates fused, FlashAttention-style
kernels for arbitrary attention-based programs, without relying on static
templates or predefined kernel specializations. Flashlight leverages PyTorch's
compilation workflow to fuse and tile attention computations transparently,
enabling efficient execution for diverse attention patterns. Not only does it
support all variants expressible in the FlexAttention model, but it also handles
more general, data-dependent attention formulations that are beyond the
capabilities of FlexAttention.
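The abstract does not show Flashlight's programming interface, but it describes the input as ordinary attention-based PyTorch programs that flow through the standard compilation workflow. The sketch below is therefore only a hypothetical illustration of that kind of code: a plain-PyTorch attention variant with a data-dependent, mean-based gating step (an invented example of behavior that is awkward to express as a static template), compiled with the stock torch.compile entry point rather than any Flashlight-specific API.

```python
# Hypothetical sketch: an attention variant written as ordinary PyTorch code.
# Per the abstract, Flashlight would fuse and tile such a program during
# compilation; nothing Flashlight-specific appears in the source itself.
import torch

def gated_attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale                   # (B, H, S, S)
    # Data-dependent step: drop scores below each query's mean score.
    row_mean = scores.mean(dim=-1, keepdim=True)
    scores = scores.masked_fill(scores < row_mean, float("-inf"))
    probs = torch.softmax(scores, dim=-1)
    return probs @ v

compiled = torch.compile(gated_attention)    # standard PyTorch compile entry point
q, k, v = (torch.randn(2, 4, 128, 64) for _ in range(3))
out = compiled(q, k, v)
```

Whether this particular gating pattern is lowered into a single fused, FlashAttention-style kernel is a claim of the paper rather than something the sketch itself demonstrates.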
Our results show that Flashlight produces kernels whose performance is
competitive with or superior to FlexAttention's, while offering the flexibility of native
PyTorch code, enabling developers to rapidly explore new attention models
without sacrificing performance.
Key Contributions
Introduces Flashlight, a compiler-native framework within PyTorch that automatically generates fused, FlashAttention-style kernels for arbitrary attention-based programs. Unlike previous methods that rely on static templates, Flashlight leverages PyTorch's compilation workflow to generate kernels dynamically, significantly accelerating attention variants without manual kernel specialization.
Business Value
Enables faster training and inference of LLMs and other attention-based models, reducing computational costs and enabling the development of larger, more complex models.