📄 Abstract
We propose a novel AutoRegressive Generation-based paradigm for image Segmentation (ARGenSeg), achieving multimodal understanding and pixel-level perception within a unified framework. Prior works integrating image segmentation into multimodal large language models (MLLMs) typically employ either boundary-point representations or dedicated segmentation heads. These methods rely on discrete representations or semantic prompts fed into task-specific decoders, which limits the ability of the MLLM to capture fine-grained visual details. To address these challenges, we introduce a segmentation framework for MLLMs based on image generation, which naturally produces dense masks for target objects. We leverage the MLLM to output visual tokens and detokenize them into images using a universal VQ-VAE, making the segmentation fully dependent on the pixel-level understanding of the MLLM. To reduce inference latency, we employ a next-scale-prediction strategy to generate the required visual tokens in parallel. Extensive experiments demonstrate that our method surpasses prior state-of-the-art approaches on multiple segmentation datasets with a remarkable boost in inference speed, while maintaining strong understanding capabilities.
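To make the pipeline concrete, below is a minimal sketch (not the authors' code) of the segmentation-as-generation idea the abstract describes: the MLLM emits discrete visual tokens scale by scale (next-scale prediction, so each scale's token grid is produced in parallel), and a VQ-VAE decoder detokenizes the final token map into a dense mask. All names (`mllm`, `vqvae`, `target_hw`), shapes, and the greedy sampling are illustrative assumptions.

```python
import torch

def generate_mask(mllm, vqvae, prompt_tokens, scales=(1, 2, 4, 8, 16)):
    """Sketch of autoregressive mask generation over increasing token-map scales.

    Assumed interfaces (hypothetical, for illustration only):
      mllm(prefix, target_hw) -> logits over the VQ codebook for every
          position of the next scale, so that whole scale is sampled at once
          (the claimed latency benefit vs. token-by-token decoding).
      vqvae.decode(code_map)  -> dense mask image for the target object.
    """
    prefix = prompt_tokens                        # text + image context tokens, (B, L)
    code_map = None
    for s in scales:                              # coarse-to-fine token grids
        logits = mllm(prefix, target_hw=(s, s))   # predict an s x s grid in parallel
        code_map = logits.argmax(dim=-1)          # greedy sampling; (B, s, s) codebook ids
        prefix = torch.cat([prefix, code_map.flatten(1)], dim=1)  # condition the next scale
    mask = vqvae.decode(code_map)                 # detokenize the finest grid into pixels
    return (mask > 0.5).float()                   # binarize to a segmentation mask
```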
Authors (7)
Xiaolong Wang
Lixiang Ru
Ziyuan Huang
Kaixiang Ji
Dandan Zheng
Jingdong Chen
+1 more
Submitted
October 23, 2025
Key Contributions
ARGenSeg introduces a novel autoregressive image-generation paradigm for image segmentation within MLLMs, enabling pixel-level perception and dense mask generation. It avoids the limitations of discrete representations and task-specific segmentation heads by having the MLLM output visual tokens that are detokenized into dense masks with a universal VQ-VAE.
Business Value
Enables more precise and context-aware image segmentation, improving applications in medical diagnostics, autonomous navigation, and content analysis.