
Polyline Path Masked Attention for Vision Transformer

Abstract

Global dependency modeling and spatial position modeling are two core issues in the design of foundational architectures for current deep learning frameworks. Recently, Vision Transformers (ViTs) have achieved remarkable success in computer vision, leveraging the powerful global dependency modeling capability of the self-attention mechanism. Furthermore, Mamba2 has demonstrated significant potential in natural language processing tasks by explicitly modeling the adjacency prior through its structured mask. In this paper, we propose Polyline Path Masked Attention (PPMA), which integrates the self-attention mechanism of ViTs with an enhanced structured mask from Mamba2, harnessing the complementary strengths of both architectures. Specifically, we first improve the traditional structured mask of Mamba2 by introducing a 2D polyline path scanning strategy and derive its corresponding structured mask, the polyline path mask, which better preserves the adjacency relationships among image tokens. Notably, we conduct a thorough theoretical analysis of the structural characteristics of the proposed polyline path mask and design an efficient algorithm for its computation. Next, we embed the polyline path mask into the self-attention mechanism of ViTs, enabling explicit modeling of the spatial adjacency prior. Extensive experiments on standard benchmarks, including image classification, object detection, and segmentation, demonstrate that our model outperforms previous state-of-the-art approaches based on both state-space models and Transformers. For example, our proposed PPMA-T/S/B models achieve 48.7%/51.1%/52.3% mIoU on the ADE20K semantic segmentation task, surpassing RMT-T/S/B by 0.7%/1.3%/0.3%, respectively. Code is available at https://github.com/zhongchenzhao/PPMA.
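
The core idea of embedding a structured mask into ViT self-attention can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's actual PPMA formulation: the function name `masked_attention`, the multiplicative mask followed by row renormalization, and the uniform toy mask are all assumptions made for demonstration purposes.

```python
import torch

def masked_attention(q, k, v, mask):
    """Self-attention whose weights are modulated by a structured mask.
    A minimal sketch of injecting a Mamba2-style spatial prior into ViT
    attention; the real PPMA formulation may differ."""
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    attn = attn * mask                            # inject the spatial adjacency prior
    attn = attn / attn.sum(dim=-1, keepdim=True)  # renormalize each row
    return attn @ v

# Toy usage: 64 tokens, 16-dim head, uniform (no-op) mask
q = torch.randn(1, 64, 16)
k = torch.randn(1, 64, 16)
v = torch.randn(1, 64, 16)
mask = torch.ones(64, 64)
print(masked_attention(q, k, v, mask).shape)  # torch.Size([1, 64, 16])
```
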
Authors (6): Zhongchen Zhao, Chaodong Xiao, Hui Lin, Qi Xie, Lei Zhang, Deyu Meng
Submitted: June 19, 2025
arXiv Category: cs.CV

Key Contributions

Proposes Polyline Path Masked Attention (PPMA) for Vision Transformers, integrating self-attention with an enhanced structured mask inspired by Mamba2. Introduces a 2D polyline path scanning strategy to better preserve spatial adjacency relationships.
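
As a rough illustration of how a 2D scan over image tokens can be turned into a structured mask, the sketch below builds a decay mask from the Manhattan distance between token grid coordinates. The helper name `grid_decay_mask`, the decay rate `gamma`, and the use of Manhattan distance are all assumptions for illustration; the paper's polyline path mask is derived from its specific scanning strategy and an efficient computation algorithm not reproduced here.

```python
import torch

def grid_decay_mask(h, w, gamma=0.9):
    """Hypothetical stand-in for a polyline path mask: decay gamma**d, where d is
    the Manhattan distance between token positions on an h x w grid."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pos = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (h*w, 2)
    dist = (pos[:, None, :] - pos[None, :, :]).abs().sum(dim=-1)     # (N, N)
    return gamma ** dist

mask = grid_decay_mask(4, 4)
print(mask.shape)   # torch.Size([16, 16])
print(mask[0, :4])  # weights decay with distance along the first row
```
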

Business Value

Could lead to stronger vision backbones for tasks such as image classification, object detection, and segmentation, while the efficient polyline path mask computation keeps the added spatial prior practical to deploy.