Unleashing Diffusion Transformers for Visual Correspondence by Modulating Massive Activations

Abstract

Pre-trained Stable Diffusion (SD) models have driven great advances in visual correspondence. In this paper, we investigate the capabilities of Diffusion Transformers (DiTs) for accurate dense correspondence. Distinct from SD, DiTs exhibit a critical phenomenon in which a very small number of feature activations take on significantly larger values than the rest, known as massive activations, yielding uninformative representations and significant performance degradation for DiTs. These massive activations consistently concentrate in a few fixed dimensions across all image patch tokens and hold little local information. Tracing these dimension-concentrated massive activations, we find that they can be effectively localized by the zero-initialized Adaptive Layer Norm (AdaLN-zero). Building on these findings, we propose Diffusion Transformer Feature (DiTF), a training-free framework designed to extract semantically discriminative features from DiTs. Specifically, DiTF employs AdaLN to adaptively localize and normalize massive activations with channel-wise modulation. In addition, we develop a channel-discard strategy to further eliminate the negative impact of massive activations. Experimental results demonstrate that DiTF outperforms both DINO- and SD-based models and establishes new state-of-the-art performance for DiTs across visual correspondence tasks (e.g., +9.4% on SPair-71k and +4.4% on AP-10K-C.S.).
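As a rough illustration of the phenomenon the abstract describes, the sketch below flags channels whose mean absolute activation across patch tokens dwarfs the rest. The tensor layout, the median baseline, and the threshold ratio are assumptions for illustration, not the paper's published detection rule.

```python
import torch

def find_massive_channels(feats: torch.Tensor, ratio: float = 50.0) -> torch.Tensor:
    """Flag channels whose mean absolute activation dwarfs the rest.

    feats: DiT patch-token features of shape (num_tokens, num_channels).
    ratio: hypothetical outlier threshold relative to the median channel
           magnitude (the paper does not specify this exact rule).
    """
    # Average magnitude per channel over all patch tokens: massive
    # activations concentrate in a few fixed dimensions across tokens.
    chan_mag = feats.abs().mean(dim=0)          # (num_channels,)
    baseline = chan_mag.median()
    return torch.nonzero(chan_mag > ratio * baseline).flatten()

# Example: a 256-token, 1152-dim feature map with two planted outlier channels.
feats = torch.randn(256, 1152)
feats[:, [17, 901]] *= 100.0
print(find_massive_channels(feats))  # tensor([ 17, 901])
```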
Authors (7)
Chaofan Gan
Yuanpeng Tu
Xi Chen
Tieyuan Chen
Yuxi Li
Mehrtash Harandi
+1 more
Submitted
May 24, 2025
arXiv Category
cs.CV
arXiv PDF

Key Contributions

Investigates the 'massive activations' phenomenon in Diffusion Transformers (DiTs) that degrades performance on tasks like visual correspondence. Proposes DiTF, a training-free framework that localizes these activations via the zero-initialized Adaptive Layer Norm (AdaLN-zero) and normalizes them with channel-wise modulation to extract more discriminative features.
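A minimal sketch of the two mechanisms this contribution names, AdaLN-style channel-wise modulation plus channel discard, assuming a (num_tokens, num_channels) feature map. The placeholder scale/shift tensors stand in for the DiT's AdaLN conditioning outputs; this is an illustration of the idea, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def modulate(feats: torch.Tensor, scale: torch.Tensor, shift: torch.Tensor) -> torch.Tensor:
    # AdaLN-style channel-wise modulation: normalize each token's feature
    # vector, then rescale and shift per channel. In a real DiT the scale
    # and shift come from the zero-initialized AdaLN conditioning branch;
    # here they are free placeholders.
    normed = F.layer_norm(feats, feats.shape[-1:])
    return normed * (1.0 + scale) + shift

def discard(feats: torch.Tensor, massive_idx: torch.Tensor) -> torch.Tensor:
    # Channel discard: zero the few dimensions dominated by massive
    # activations so they cannot swamp feature matching.
    out = feats.clone()
    out[:, massive_idx] = 0.0
    return out

# Usage on a (num_tokens, num_channels) feature map:
feats = torch.randn(256, 1152)
scale = torch.zeros(1152)   # placeholder conditioning outputs
shift = torch.zeros(1152)
clean = discard(modulate(feats, scale, shift), torch.tensor([17, 901]))
```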

Business Value

Enables more accurate and robust visual understanding for applications like robotics, AR/VR, and image editing by extracting better features from powerful DiT models without requiring additional training.