PruneHal: Reducing Hallucinations in Multi-modal Large Language Models through Adaptive KV Cache Pruning

📄 Abstract

While multi-modal large language models (MLLMs) have made significant progress in recent years, hallucination remains a major challenge. To mitigate this phenomenon, existing solutions either introduce additional data for further training or incorporate external or internal information during inference. However, these approaches inevitably introduce extra computational cost. In this paper, we observe that hallucinations in MLLMs are strongly associated with insufficient attention allocated to visual tokens. In particular, the presence of redundant visual tokens disperses the model's attention, preventing it from focusing on the most informative ones. As a result, critical visual cues are often under-attended, which in turn exacerbates hallucinations. Building on this observation, we propose PruneHal, a training-free, simple yet effective method that leverages adaptive KV cache pruning to enhance the model's focus on critical visual information, thereby mitigating hallucinations. To the best of our knowledge, we are the first to apply token pruning for hallucination mitigation in MLLMs. Notably, our method requires no additional training and incurs nearly no extra inference cost. Moreover, PruneHal is model-agnostic and can be seamlessly integrated with different decoding strategies, including those specifically designed for hallucination mitigation. We evaluate PruneHal on several widely used hallucination evaluation benchmarks using four mainstream MLLMs, achieving robust and strong results that highlight the effectiveness and superiority of our method. Our code will be publicly available.
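
The abstract describes the mechanism only at a high level. As a rough illustration of what attention-guided KV cache pruning of visual tokens could look like, here is a minimal sketch; the function name, tensor layout, and the fixed keep_ratio are all assumptions for illustration, since PruneHal itself selects the kept tokens adaptively:

```python
import torch

def prune_visual_kv(keys, values, attn_weights, visual_slice, keep_ratio=0.5):
    """Drop the least-attended visual tokens from one layer's KV cache.

    keys, values : [batch, heads, seq_len, head_dim] cached tensors
    attn_weights : [batch, heads, q_len, seq_len] attention weights from
                   the current forward pass
    visual_slice : slice(start, stop) covering the visual-token positions
    keep_ratio   : fraction of visual tokens to retain (illustrative fixed
                   knob; PruneHal chooses the kept set adaptively)
    Assumes batch size 1 for simplicity.
    """
    # Attention mass each visual token receives, averaged over heads
    # and query positions.
    vis_attn = attn_weights[0, :, :, visual_slice].mean(dim=(0, 1))  # [n_vis]

    n_keep = max(1, int(vis_attn.numel() * keep_ratio))
    top = vis_attn.topk(n_keep).indices  # most-attended visual tokens

    seq_len = keys.shape[2]
    pos = torch.arange(seq_len, device=keys.device)
    vis_pos = pos[visual_slice]
    keep = torch.cat([pos[: visual_slice.start],
                      vis_pos[top],
                      pos[visual_slice.stop:]]).sort().values

    return keys[:, :, keep, :], values[:, :, keep, :]
```

Pruning the cache this way shrinks the key/value tensors that every later decoding step attends over, which is consistent with the paper's claim of nearly zero extra inference cost.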
Authors (8)
Fengyuan Sun
Hui Chen
Xinhao Xu
Dandan Zheng
Jingdong Chen
Jun Zhou
+2 more
Submitted
October 22, 2025
arXiv Category
cs.CV

Key Contributions

Proposes PruneHal, a training-free method that uses adaptive KV cache pruning to reduce hallucinations in Multi-modal Large Language Models (MLLMs). It addresses the issue of attention dispersion caused by redundant visual tokens, which leads to critical visual cues being under-attended. This method enhances the model's focus on informative visual inputs without requiring additional training or external information during inference.
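
The summary does not spell out what makes the pruning "adaptive". Purely as a hedged reading, one way to adapt the kept-token count to each input is to size it from the attention distribution itself rather than a fixed ratio; adaptive_keep_count below is a hypothetical helper building on the prune_visual_kv sketch above, and the mass threshold is an assumption:

```python
import torch

def adaptive_keep_count(vis_attn, mass=0.9):
    """One plausible adaptive rule (an assumption, not the paper's rule):
    keep the smallest set of visual tokens whose combined attention
    already covers `mass` of the total visual attention.

    vis_attn : [n_vis] per-token attention mass, as in prune_visual_kv
    """
    sorted_attn, _ = vis_attn.sort(descending=True)
    cum = sorted_attn.cumsum(0) / sorted_attn.sum()
    # First index whose cumulative share reaches the threshold.
    k = int(torch.searchsorted(cum, torch.tensor(mass, device=cum.device)).item()) + 1
    return min(k, vis_attn.numel())
```

Under this reading, an image whose attention is concentrated on a few regions is pruned more aggressively than a cluttered one, since the count is derived from the attention distribution at inference time rather than from a fixed budget.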

Business Value

Improves the trustworthiness and accuracy of MLLMs, making them more suitable for real-world applications where factual correctness is essential, such as content creation, image analysis, and assistive technologies.