Sparser Block-Sparse Attention via Token Permutation

📄 Abstract

Scaling the context length of large language models (LLMs) offers significant benefits but is computationally expensive. This expense stems primarily from the self-attention mechanism, whose O(N²) complexity with respect to sequence length presents a major bottleneck for both memory and latency. Fortunately, the attention matrix is often sparse, particularly for long sequences, suggesting an opportunity for optimization. Block-sparse attention has emerged as a promising solution that partitions sequences into blocks and skips computation for a subset of these blocks. However, the effectiveness of this method is highly dependent on the underlying attention patterns, which can lead to sub-optimal block-level sparsity. For instance, important key tokens for queries within a single block may be scattered across numerous other blocks, leading to computational redundancy. In this work, we propose Permuted Block-Sparse Attention (PBS-Attn), a plug-and-play method that leverages the permutation properties of attention to increase block-level sparsity and enhance the computational efficiency of LLM prefilling. We conduct comprehensive experiments on challenging real-world long-context datasets, demonstrating that PBS-Attn consistently outperforms existing block-sparse attention methods in model accuracy and closely matches the full attention baseline. Powered by our custom permuted-FlashAttention kernels, PBS-Attn achieves an end-to-end speedup of up to 2.75× in long-context prefilling, confirming its practical viability. Code available at https://github.com/xinghaow99/pbs-attn
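
For context on the mechanism the abstract builds on, the sketch below is a minimal NumPy reference of plain block-sparse attention. It is an illustration under assumed block size and mask layout, not the paper's permuted-FlashAttention kernels: each query block computes attention only over the key blocks its block mask marks as active, and skipped blocks cost nothing.

```python
import numpy as np

def block_sparse_attention(q, k, v, block_mask, block_size=64):
    """Reference block-sparse attention (illustrative, not the paper's kernel).

    q, k, v: (N, d) arrays. block_mask: (N // block_size, N // block_size) booleans;
    block_mask[qb, kb] == True means query block qb attends to key block kb.
    """
    n, d = q.shape
    out = np.zeros_like(v)
    for qb in range(n // block_size):
        q_rows = slice(qb * block_size, (qb + 1) * block_size)
        kept = np.flatnonzero(block_mask[qb])
        if kept.size == 0:
            continue  # every key block is skipped for this query block
        # Gather only the active key blocks; the skipped ones are never touched.
        cols = np.concatenate([np.arange(b * block_size, (b + 1) * block_size) for b in kept])
        logits = q[q_rows] @ k[cols].T / np.sqrt(d)
        weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[q_rows] = weights @ v[cols]
    return out

# Toy usage: 256 tokens, a 4x4 block mask keeping roughly half of the blocks.
rng = np.random.default_rng(0)
n, d, bs = 256, 32, 64
q, k, v = rng.standard_normal((3, n, d))
mask = rng.random((n // bs, n // bs)) < 0.5
out = block_sparse_attention(q, k, v, mask, block_size=bs)
```

The fewer blocks the mask keeps, the less work the loop does, which is why raising block-level sparsity (the paper's goal) translates directly into prefill speedups.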
Authors (10)
Xinghao Wang
Pengyu Wang
Dong Zhang
Chenkun Tan
Shaojun Zhou
Zhaoxiang Liu
+4 more
Submitted
October 24, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

This paper introduces Permuted Block-Sparse Attention (PBS-Attn), a plug-and-play method that improves block-sparse attention by permuting tokens to achieve higher block-level sparsity. This addresses the sub-optimal sparsity of existing block-sparse methods, yielding more efficient long-context prefilling (up to a 2.75× end-to-end speedup) while closely matching the accuracy of full attention.
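
As a concrete toy of why permutation helps, the sketch below is illustrative only: the block size, attention pattern, and threshold are assumptions, and PBS-Attn derives its permutation from the model's actual attention structure rather than hard-coding one. It scatters each query block's important keys across four key blocks, then permutes the keys so they become contiguous, shrinking the number of blocks a block-sparse kernel would have to compute.

```python
import numpy as np

N, BLOCK = 32, 4
NB = N // BLOCK  # number of query/key blocks

def count_active_blocks(scores, thresh=0.5):
    """(query block, key block) pairs a block-sparse kernel would have to compute."""
    blocked = scores.reshape(NB, BLOCK, NB, BLOCK).max(axis=(1, 3))
    return int((blocked >= thresh).sum())

# Toy attention pattern: queries in block b attend strongly to keys b, b+8, b+16, b+24,
# so each query block's important keys are scattered across 4 different key blocks.
scores = np.full((N, N), 0.01)
for qb in range(NB):
    for q in range(qb * BLOCK, (qb + 1) * BLOCK):
        scores[q, [qb, qb + 8, qb + 16, qb + 24]] = 1.0

# A key permutation that gathers each query block's important keys into one block
# (hard-coded here for the toy pattern; the paper computes a suitable permutation).
perm = np.argsort([j % NB for j in range(N)], kind="stable")

print("active blocks without permutation:", count_active_blocks(scores))           # 32 of 64
print("active blocks with permutation:   ", count_active_blocks(scores[:, perm]))  # 8 of 64
```

In a kernel like the one sketched under the abstract, that drop from 32 active blocks to 8 is exactly the computation saved.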

Business Value

Enabling LLMs to process longer contexts more efficiently can unlock new applications in areas like document analysis, long-form content generation, and complex dialogue systems, while also reducing operational costs.