A Closer Look at Bias and Chain-of-Thought Faithfulness of Large (Vision) Language Models

📄 Abstract

Chain-of-thought (CoT) reasoning enhances the performance of large language models, but questions remain about whether these reasoning traces faithfully reflect the model's internal processes. We present the first comprehensive study of CoT faithfulness in large vision-language models (LVLMs), investigating how both text-based and previously unexplored image-based biases affect reasoning and bias articulation. Our work introduces a novel, fine-grained evaluation pipeline for categorizing bias articulation patterns, enabling significantly more precise analysis of CoT reasoning than previous methods. This framework reveals critical distinctions in how models process and respond to different types of biases, providing new insights into LVLM CoT faithfulness. Our findings reveal that subtle image-based biases are rarely articulated compared to explicit text-based ones, even in models specialized for reasoning. Additionally, many models exhibit a previously unidentified phenomenon we term "inconsistent" reasoning: correctly reasoning before abruptly changing answers, which serves as a potential canary for detecting biased reasoning arising from unfaithful CoTs. We then apply the same evaluation pipeline to revisit CoT faithfulness in LLMs across various levels of implicit cues. Our results show that current language-only reasoning models continue to struggle to articulate cues that are not overtly stated.
Authors (3)
Sriram Balasubramanian
Samyadeep Basu
Soheil Feizi
Submitted
May 29, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

This paper presents the first comprehensive study of Chain-of-Thought (CoT) faithfulness in Large Vision-Language Models (LVLMs). It introduces a novel evaluation pipeline for categorizing bias articulation patterns, revealing that subtle image-based biases are rarely articulated compared to explicit text-based ones, and identifying a previously unreported "inconsistent" reasoning phenomenon, in which models reason correctly before abruptly switching to a different final answer.
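
To make the categorization idea concrete, below is a minimal sketch of how a pipeline like this might label individual CoT traces. Everything here is an illustrative assumption rather than the authors' implementation: the `CoTSample` fields, the category names, and the keyword-matching heuristic for cue mentions are hypothetical stand-ins for the paper's finer-grained evaluation.

```python
# Hypothetical sketch of a bias-articulation categorizer.
# All names, labels, and heuristics are illustrative assumptions,
# not the paper's actual pipeline.

from dataclasses import dataclass, field

@dataclass
class CoTSample:
    reasoning: str        # the model's chain-of-thought text
    final_answer: str     # the answer the model ultimately commits to
    reasoned_answer: str  # the answer its reasoning actually supports
    biased_answer: str    # the answer the injected cue points toward
    cue_keywords: list[str] = field(default_factory=list)  # surface forms of the cue

def categorize(sample: CoTSample) -> str:
    """Assign one of four illustrative articulation categories."""
    mentions_cue = any(kw.lower() in sample.reasoning.lower()
                       for kw in sample.cue_keywords)
    follows_bias = sample.final_answer == sample.biased_answer

    if follows_bias and mentions_cue:
        return "articulated-bias"   # biased answer, but the CoT admits the cue
    if follows_bias and sample.reasoned_answer != sample.final_answer:
        return "inconsistent"       # reasoning supported one answer, output another
    if follows_bias:
        return "silent-bias"        # biased answer with no mention of the cue
    return "unbiased"               # the cue did not flip the answer
```

The "inconsistent" branch mirrors the canary described in the abstract: it fires only when the answer supported by the reasoning trace diverges from the final answer, which in practice would require a separate judge (human or model) to extract `reasoned_answer` from the trace.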

Business Value

Improves the reliability and trustworthiness of multimodal AI systems by providing deeper insights into their reasoning processes and biases. This is crucial for applications where accurate and unbiased visual understanding is required.