
Measuring Chain of Thought Faithfulness by Unlearning Reasoning Steps

πŸ“„ Abstract

When prompted to think step-by-step, language models (LMs) produce a chain of thought (CoT), a sequence of reasoning steps that the model supposedly used to produce its prediction. Despite much work on CoT prompting, it is unclear if reasoning verbalized in a CoT is faithful to the models' parametric beliefs. We introduce a framework for measuring parametric faithfulness of generated reasoning, and propose Faithfulness by Unlearning Reasoning steps (FUR), an instance of this framework. FUR erases information contained in reasoning steps from model parameters, and measures faithfulness as the resulting effect on the model's prediction. Our experiments with four LMs and five multi-hop multi-choice question answering (MCQA) datasets show that FUR is frequently able to precisely change the underlying models' prediction for a given instance by unlearning key steps, indicating when a CoT is parametrically faithful. Further analysis shows that CoTs generated by models post-unlearning support different answers, hinting at a deeper effect of unlearning.
Authors (4)
Martin Tutek
Fateme Hashemi Chaleshtori
Ana Marasović
Yonatan Belinkov
Submitted
February 20, 2025
arXiv Category
cs.CL
arXiv PDF

Key Contributions

This paper introduces FUR, a framework for measuring the parametric faithfulness of reasoning steps generated by language models. By unlearning the information carried by individual reasoning steps from the model's parameters and measuring how the prediction changes, FUR indicates whether the verbalized CoT is aligned with the model's internal (parametric) beliefs; a hedged sketch of this loop follows below. This kind of test is important for deciding how much to trust the reasoning LLMs verbalize.
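
To make the mechanism concrete, here is a minimal, hypothetical sketch of a FUR-style loop: erase one verbalized reasoning step from the parameters (here via plain gradient ascent on the step's likelihood, used as a simple stand-in for the paper's actual unlearning procedure) and check whether the model's multiple-choice answer changes. The model name, prompts, hyperparameters, and scoring helper are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical FUR-style faithfulness check (not the authors' code).
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; swap in one of the LMs under study

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


def answer_letter(m, question, options):
    """Score each option letter by its next-token logit and return the best one."""
    prompt = question + "\n" + "\n".join(options) + "\nAnswer:"
    ids = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = m(**ids).logits[0, -1]                    # next-token distribution
    letters = [opt.split(")")[0] for opt in options]       # "A", "B", ...
    letter_ids = [tokenizer.encode(" " + l)[0] for l in letters]
    return letters[int(logits[letter_ids].argmax())]


def unlearn_step(m, step_text, lr=5e-5, iters=3):
    """Gradient *ascent* on the step's NLL, nudging that fact out of the parameters."""
    m = copy.deepcopy(m)                                   # leave the original model intact
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    ids = tokenizer(step_text, return_tensors="pt")
    for _ in range(iters):
        loss = m(**ids, labels=ids["input_ids"]).loss
        (-loss).backward()                                 # maximize the step's NLL
        opt.step()
        opt.zero_grad()
    return m


question = "Which city is the capital of the country where the Eiffel Tower stands?"
options = ["A) Paris", "B) Berlin", "C) Madrid", "D) Rome"]
cot_step = "The Eiffel Tower is located in France."        # one verbalized reasoning step

before = answer_letter(model, question, options)
after = answer_letter(unlearn_step(model, cot_step), question, options)
print(f"prediction before unlearning: {before}  after: {after}")
```

If the answer flips after unlearning a step, that step was parametrically load-bearing for the prediction; if it stays the same, the verbalized step may not reflect what the model actually relied on.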

Business Value

Enhances trust and reliability in LLM-generated reasoning, which is critical for applications in sensitive domains such as legal, medical, or financial advice, where the reasoning process must be sound and verifiable.