
Adamas: Hadamard Sparse Attention for Efficient Long-Context Inference

Abstract

Large language models (LLMs) now support context windows of hundreds of thousands to millions of tokens, enabling applications such as long-document summarization, large-scale code synthesis, multi-document question answering, and persistent multi-turn dialogue. However, such extended contexts exacerbate the quadratic cost of self-attention, leading to severe latency in autoregressive decoding. Existing sparse attention methods alleviate these costs, but they rely on heuristic patterns that struggle to recall the critical key-value (KV) pairs for each query, resulting in accuracy degradation. We introduce Adamas, a lightweight yet highly accurate sparse attention mechanism designed for long-context inference. Adamas applies the Hadamard transform, bucketization, and 2-bit compression to produce compact representations, and leverages Manhattan-distance estimation for efficient top-k selection. Experiments show that Adamas matches the accuracy of full attention with a budget of only 64 tokens, achieves near-lossless performance at 128, and supports up to 8x higher sparsity than prior state-of-the-art (SOTA) methods, while delivering up to 4.4x self-attention and 1.5x end-to-end speedups on 32K-length sequences. Remarkably, Adamas attains comparable or even lower perplexity than full attention, underscoring its effectiveness in maintaining accuracy under aggressive sparsity.
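
A minimal sketch of the compact-representation step the abstract describes, assuming an orthonormal Hadamard rotation followed by per-coordinate bucketization into four levels (2 bits per value); the function name and the std-based threshold scheme are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np
from scipy.linalg import hadamard

def encode_2bit(x: np.ndarray) -> np.ndarray:
    """Illustrative sketch: Hadamard-rotate a query/key vector, then
    bucketize each coordinate into one of 4 levels (2 bits per value).
    The symmetric std-based thresholds are an assumption, not the
    paper's calibration scheme."""
    d = x.shape[-1]                        # head dimension, must be a power of two
    H = hadamard(d) / np.sqrt(d)           # orthonormal Hadamard matrix
    y = x @ H                              # rotation spreads energy across coordinates
    t = y.std()                            # illustrative bucket scale
    return np.digitize(y, [-t, 0.0, t]).astype(np.uint8)  # codes in {0, 1, 2, 3}
```

Because the Hadamard matrix is orthogonal, the rotation preserves distances between vectors, which is what makes distance estimation on the heavily quantized codes meaningful.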
Authors (7)
Siyuan Yan
Guo-Qing Jiang
Yuchen Zhang
Xiaoxing Ma
Ran Zhu
Chun Cao
+1 more
Submitted
October 21, 2025
arXiv Category
cs.CL

Key Contributions

Adamas is a lightweight, accurate sparse attention mechanism for efficient long-context LLM inference. It applies the Hadamard transform, bucketization, and 2-bit compression to build compact key and query representations, and uses Manhattan-distance estimation for efficient top-k selection, matching full-attention accuracy at a budget as small as 64 tokens while significantly reducing computational cost. A sketch of the selection step follows below.
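
A companion sketch of the top-k selection step, assuming the query and cached keys have been encoded with a 2-bit scheme like the one above: rank keys by Manhattan (L1) distance between codes and keep the `budget` closest ones for exact attention. The selection rule and names are assumptions consistent with this summary, not the paper's actual kernel:

```python
import numpy as np

def select_topk(q_code: np.ndarray, k_codes: np.ndarray, budget: int = 64) -> np.ndarray:
    """Illustrative sketch: estimate query-key relevance via L1 distance
    between 2-bit codes and return the indices of the `budget` closest
    keys. Exact attention is then computed over only these KV pairs."""
    dists = np.abs(k_codes.astype(np.int16) - q_code.astype(np.int16)).sum(axis=-1)
    # Smallest estimated distance ~ most relevant KV pairs for this query
    return np.argsort(dists)[:budget]
```

Decoding would then attend only over the selected slices of the KV cache (e.g., `K[idx]` and `V[idx]`), which is where a fixed 64- or 128-token budget translates into the reported self-attention and end-to-end speedups.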

Business Value

Enables faster and more cost-effective deployment of LLMs for tasks requiring long context, such as summarizing lengthy documents, analyzing large codebases, or maintaining extended dialogues, making these applications more practical and accessible.