Sherlock: Self-Correcting Reasoning in Vision-Language Models

Abstract

Reasoning Vision-Language Models (VLMs) have shown promising performance on complex multimodal tasks. However, they still face significant challenges: they are highly sensitive to reasoning errors, require large volumes of annotated data or accurate verifiers, and struggle to generalize beyond specific domains. To address these limitations, we explore self-correction as a strategy to enhance reasoning VLMs. We first conduct an in-depth analysis of reasoning VLMs' self-correction abilities and identify key gaps. Based on our findings, we introduce Sherlock, a self-correction and self-improvement training framework. Sherlock introduces a trajectory-level self-correction objective, a preference data construction method based on visual perturbation, and a dynamic $\beta$ for preference tuning. Once the model acquires self-correction capabilities using only 20k randomly sampled annotated examples, it continues to self-improve without external supervision. Built on the Llama3.2-Vision-11B model, Sherlock achieves remarkable results across eight benchmarks, reaching an average accuracy of 64.1 with direct generation and 65.4 after self-correction. It outperforms LLaVA-CoT (63.2), Mulberry (63.9), and LlamaV-o1 (63.4) while using less than 20% of the annotated data.
Authors (2)
Yi Ding
Ruqi Zhang
Submitted
May 28, 2025
arXiv Category
cs.CV
arXiv PDF

Key Contributions

Introduces Sherlock, a self-correction and self-improvement training framework for reasoning Vision-Language Models (VLMs). It features a trajectory-level self-correction objective and a preference data construction method using visual perturbation, enabling models to improve their reasoning without external supervision after initial training.
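The "dynamic $\beta$ for preference tuning" mentioned above can be illustrated with a DPO-style preference loss in which the strength parameter $\beta$ varies per sample rather than staying fixed. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's exact formulation; the `confidence`-based schedule in `dynamic_beta` is an assumption for demonstration.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta):
    """DPO-style loss for one preference pair with strength beta.

    logp_* are the policy's log-probabilities of the chosen/rejected
    responses; ref_logp_* are the frozen reference model's.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), written in a numerically stable form
    return math.log1p(math.exp(-margin))

def dynamic_beta(base_beta, confidence):
    """Hypothetical schedule: scale beta by how reliable the preference
    pair is (confidence in [0, 1]), so cleaner pairs drive stronger
    updates. Sherlock's actual rule may differ."""
    return base_beta * confidence

# A pair where the policy already prefers the chosen response yields a
# smaller loss than a neutral pair.
neutral = dpo_loss(0.0, 0.0, 0.0, 0.0, dynamic_beta(0.1, 1.0))
aligned = dpo_loss(1.0, -1.0, 0.0, 0.0, dynamic_beta(0.1, 1.0))
```

Here `neutral` equals log 2 (no preference signal), while `aligned` is strictly smaller, showing the loss rewards agreement with the preference.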

Business Value

Leads to more reliable and adaptable multimodal AI systems, capable of performing complex reasoning tasks in diverse environments, reducing development costs associated with data annotation.