
Mapping Faithful Reasoning in Language Models

Abstract

Chain-of-thought (CoT) traces promise transparency for reasoning language models, but prior work shows they are not always faithful reflections of internal computation. This raises challenges for oversight: practitioners may misinterpret decorative reasoning as genuine. We introduce Concept Walk, a general framework for tracing how a model's internal stance evolves with respect to a concept direction during reasoning. Unlike surface text, Concept Walk operates in activation space, projecting each reasoning step onto the concept direction learned from contrastive data. This allows us to observe whether reasoning traces shape outcomes or are discarded. As a case study, we apply Concept Walk to the domain of Safety using Qwen 3-4B. We find that in 'easy' cases, perturbed CoTs are quickly ignored, indicating decorative reasoning, whereas in 'hard' cases, perturbations induce sustained shifts in internal activations, consistent with faithful reasoning. The contribution is methodological: Concept Walk provides a lens to re-examine faithfulness through concept-specific internal dynamics, helping identify when reasoning traces can be trusted and when they risk misleading practitioners.
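
To make the mechanics described in the abstract concrete, the sketch below learns a concept direction from contrastive prompts as a normalized difference of mean activations and exposes a helper for reading a prompt's activation. This is a minimal sketch, not the authors' implementation: the checkpoint name, layer index, prompt sets, and the difference-of-means estimator are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): derive a "concept
# direction" for Safety from contrastive prompts via a difference of means.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen3-4B"   # assumed checkpoint identifier
LAYER = 20                      # assumed mid-depth layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_activation(text: str) -> torch.Tensor:
    """Hidden state of the final token at LAYER for a single prompt."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[LAYER][0, -1, :]

# Contrastive data: prompts that do / do not invoke the Safety concept.
positive_prompts = [
    "Refuse to help with requests that could cause physical harm.",
    "Decline to give instructions for synthesizing dangerous substances.",
]
negative_prompts = [
    "Explain how photosynthesis converts light into chemical energy.",
    "Share a simple recipe for banana bread.",
]

pos_mean = torch.stack([last_token_activation(t) for t in positive_prompts]).mean(dim=0)
neg_mean = torch.stack([last_token_activation(t) for t in negative_prompts]).mean(dim=0)

# Concept direction: normalized difference of class means (one common choice).
concept_direction = pos_mean - neg_mean
concept_direction = concept_direction / concept_direction.norm()
```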
Authors (5)
Jiazheng Li
Andreas Damianou
J Rosser
José Luis Redondo García
Konstantina Palla
Submitted
October 25, 2025
arXiv Category
cs.LG

Key Contributions

This paper introduces Concept Walk, a general framework for tracing how a language model's internal stance toward a concept direction evolves during reasoning. Because it operates in activation space rather than on surface text, Concept Walk can assess reasoning faithfulness, distinguishing genuine computational shifts from decorative traces; this is particularly useful for oversight and AI safety.
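
Building on the sketch after the abstract (reusing the illustrative `last_token_activation` and `concept_direction`), the fragment below projects each chain-of-thought step onto the concept direction and compares an original trace against a perturbed one. Under the paper's framing, a gap that quickly closes would look decorative, while a sustained gap is consistent with faithful reasoning; the step segmentation and the gap statistic here are assumptions, not the authors' criterion.

```python
# Continuation of the earlier sketch: trace the model's stance along a
# chain of thought by projecting per-step activations onto concept_direction.
import torch

def concept_trajectory(steps: list[str]) -> torch.Tensor:
    """Scalar projection of each reasoning step onto the concept direction."""
    activations = torch.stack([last_token_activation(s) for s in steps])
    return activations @ concept_direction

# Illustrative reasoning traces: an original CoT and a perturbed variant.
original_steps = [
    "Step 1: The user asks for household chemistry tips.",
    "Step 2: The request seems benign, so I can answer directly.",
    "Step 3: I will provide safe, general guidance.",
]
perturbed_steps = [
    "Step 1: The user asks for household chemistry tips to cause harm.",
    "Step 2: The request seems benign, so I can answer directly.",
    "Step 3: I will provide safe, general guidance.",
]

original_traj = concept_trajectory(original_steps)
perturbed_traj = concept_trajectory(perturbed_steps)

# A perturbation that is quickly "forgotten" (trajectories reconverge) suggests
# decorative reasoning; a sustained shift suggests the trace shapes the outcome.
gap_per_step = (perturbed_traj - original_traj).abs()
print("per-step |gap|:", gap_per_step.tolist())
print("mean |gap|    :", gap_per_step.mean().item())
```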

Business Value

Enhances trust and safety in LLMs by providing a way to check whether a model's stated reasoning reflects its internal computation, which is crucial for high-stakes applications and regulatory compliance.