
Attention Sinks in Diffusion Language Models

Abstract

Masked Diffusion Language Models (DLMs) have recently emerged as a promising alternative to traditional Autoregressive Models (ARMs). DLMs employ transformer encoders with bidirectional attention, enabling parallel token generation while maintaining competitive performance. Although their efficiency and effectiveness have been extensively studied, the internal mechanisms that govern DLMs remain largely unexplored. In this work, we conduct an empirical analysis of DLM attention patterns, focusing on the attention sinking phenomenon, an effect previously observed in various transformer-based architectures. Our findings reveal that DLMs also exhibit attention sinks, but with distinct characteristics. First, unlike in ARMs, the sink positions in DLMs tend to shift throughout the generation process, displaying a dynamic behaviour. Second, while ARMs are highly sensitive to the removal of attention sinks, DLMs remain robust: masking sinks leads to only a minor degradation in performance. These results provide new insights into the inner workings of diffusion-based language models and highlight fundamental differences in how they allocate and utilize attention compared to autoregressive models.
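
To make the notion of an attention sink concrete, here is a minimal sketch (not the authors' code) of how one might flag sink positions in a bidirectional attention map: a sink is a key position that receives a disproportionate share of attention mass across query positions. The tensor shapes and the threshold of 4x the uniform baseline are illustrative assumptions, not the paper's protocol.

```python
import torch

def find_attention_sinks(attn: torch.Tensor, threshold: float = 4.0) -> torch.Tensor:
    """Flag candidate attention-sink positions in a bidirectional attention map.

    attn: (num_heads, seq_len, seq_len) attention weights, where each query
    row sums to 1 over the key positions. Shapes and threshold are assumed
    for illustration. Returns a boolean mask over key positions.
    """
    # Average over heads, then sum the attention each key position
    # receives from all query positions.
    incoming = attn.mean(dim=0).sum(dim=0)   # (seq_len,)
    # Under a uniform map every key would receive ~1.0 unit of incoming
    # mass, so the mean is the natural baseline.
    expected = incoming.mean()
    return incoming > threshold * expected   # sinks hog attention mass

# Tiny usage example with a synthetic attention map.
if __name__ == "__main__":
    heads, seq = 4, 16
    logits = torch.randn(heads, seq, seq)
    logits[..., 0] += 5.0                    # make key position 0 a sink
    attn = logits.softmax(dim=-1)
    print(find_attention_sinks(attn).nonzero().flatten())  # -> tensor([0])
```

In the ARM setting, sinks found this way are typically static (often the first token); the paper's finding is that in DLMs these positions move as generation proceeds.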
Authors (6)
Maximo Eduardo Rulli
Simone Petruzzi
Edoardo Michielon
Fabrizio Silvestri
Simone Scardapane
Alessio Devoto
Submitted
October 17, 2025
arXiv Category
cs.CL

Key Contributions

Provides an empirical analysis of attention patterns in Masked Diffusion Language Models (DLMs), showing that attention sinks in DLMs behave differently from those in Autoregressive Models (ARMs): sink positions shift dynamically during generation, and masking them causes only minor performance degradation, indicating robustness.
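
For intuition about the masking intervention behind the robustness finding, here is a minimal post-hoc sketch under assumed shapes; the paper's exact procedure may differ (e.g., it may intervene inside the model rather than on a precomputed map). It zeroes the attention paid to sink key positions and renormalizes each query row.

```python
import torch

def mask_sinks(attn: torch.Tensor, sink_idx: torch.Tensor) -> torch.Tensor:
    """Zero out attention paid to sink key positions and renormalize.

    attn: (num_heads, seq_len, seq_len) attention weights; sink_idx holds
    key positions to suppress. Illustrative only, not the paper's code.
    """
    masked = attn.clone()
    masked[..., sink_idx] = 0.0              # drop mass sent to sinks
    # Renormalize so each query row again sums to 1.
    return masked / masked.sum(dim=-1, keepdim=True).clamp_min(1e-9)
```

The reported contrast is that ARMs degrade sharply under this kind of intervention, while DLMs tolerate it with only minor loss.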

Business Value

Improves understanding of diffusion-based language models, potentially leading to more efficient and robust text generation systems for various NLP applications.