Abstract
Current VLM-based VQA methods often process the entire image, producing an
excessive number of visual tokens, many of which encode redundant information
irrelevant to the posed question. This surplus of unneeded detail drastically
increases memory and computational requirements in
VLMs. To address this, we propose Contextual Region-Oriented Visual Token
Pruning (CROP), a novel framework to compress visual tokens through a two-step
process: Localization and Pruning. Specifically, CROP first employs an
efficient model to identify the contextual region relevant to the input query.
Subsequently, two distinct strategies are introduced for pruning: (1) Pre-LLM
Compression (PLC), which adaptively compresses different image regions with
varying ratios, and (2) Inner-LLM Pruning (ILP), a training-free method that
prunes tokens within early LLM layers guided by the identified contextual
region. Extensive experiments on a wide range of VQA tasks demonstrate that
CROP significantly outperforms existing visual token pruning methods and
achieves state-of-the-art performance.
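The abstract only outlines the two-step pipeline, so the following is a minimal illustrative sketch of the general idea rather than the authors' implementation. It assumes visual tokens laid out on a square grid and a localized contextual region given as a bounding box; the specific choices (uniform subsampling for out-of-region compression, a hard drop for inner-LLM pruning) and all names (`inside_region`, `plc_compress`, `ilp_prune`) are assumptions for illustration.

```python
# Illustrative sketch of the two steps described in the abstract:
# (1) localize a question-relevant region, (2) compress or prune visual
# tokens outside that region. Not the paper's actual implementation.
import numpy as np


def inside_region(grid_hw, region):
    """Boolean mask over a grid of visual tokens; region = (r0, c0, r1, c1)."""
    h, w = grid_hw
    r0, c0, r1, c1 = region
    mask = np.zeros((h, w), dtype=bool)
    mask[r0:r1, c0:c1] = True
    return mask.reshape(-1)


def plc_compress(tokens, mask, out_ratio=0.25):
    """Pre-LLM Compression (illustrative): keep in-region tokens intact and
    retain only a fraction of out-of-region tokens (here: uniform subsampling,
    standing in for an adaptive, region-dependent compression ratio)."""
    in_tok = tokens[mask]
    out_tok = tokens[~mask]
    step = max(1, int(round(1.0 / out_ratio)))
    return np.concatenate([in_tok, out_tok[::step]], axis=0)


def ilp_prune(tokens, mask):
    """Inner-LLM Pruning (illustrative): at an early LLM layer, drop the
    tokens that fall outside the identified contextual region."""
    return tokens[mask]


if __name__ == "__main__":
    h = w = 24                                 # 24x24 grid -> 576 visual tokens
    d = 64                                     # toy hidden size
    tokens = np.random.randn(h * w, d)
    mask = inside_region((h, w), region=(6, 6, 18, 18))  # localized region
    print("original tokens :", tokens.shape[0])
    print("after PLC       :", plc_compress(tokens, mask).shape[0])
    print("after ILP       :", ilp_prune(tokens, mask).shape[0])
```

In this toy setup the 576-token image shrinks to 252 tokens under the PLC-style compression and to 144 under the ILP-style pruning, which is the kind of reduction in visual-token count the abstract targets; the actual ratios and selection criteria in CROP are determined by the paper's method, not by this sketch.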