📄 Abstract
Large Language Models (LLMs) have shown impressive performance on complex
tasks through Chain-of-Thought (CoT) reasoning. However, conventional CoT
relies on explicitly verbalized intermediate steps, which constrains its
broader applicability, particularly in abstract reasoning tasks beyond
language. To address this, there has been growing research interest in
latent CoT reasoning, where the reasoning process is embedded within
latent spaces. By decoupling reasoning from explicit language generation,
latent CoT offers the promise of richer cognitive representations and
facilitates more flexible, faster inference. This paper aims to present a
comprehensive overview of this emerging paradigm and establish a systematic
taxonomy. We analyze recent advances in methods, categorizing them from
token-wise horizontal approaches to layer-wise vertical strategies. We then
provide in-depth discussions of these methods, highlighting their design
principles, applications, and remaining challenges. We hope that our survey
provides a structured foundation for advancing this promising direction in LLM
reasoning. The relevant papers will be regularly updated at
https://github.com/EIT-NLP/Awesome-Latent-CoT.
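To make the core idea concrete, here is a minimal, illustrative sketch of token-wise latent reasoning: instead of verbalizing intermediate steps as text, the model's last hidden state is fed back as the next input embedding for a few "latent" steps before visible decoding resumes. The choice of GPT-2 as the backbone, the number of latent steps, and the prompt are assumptions for illustration only (they are not specified by the survey), and an off-the-shelf model is not trained to exploit such latent steps, so this shows the mechanism rather than a working latent reasoner.

```python
# Illustrative sketch (not from the survey): token-wise latent reasoning.
# Assumptions: GPT-2 backbone, 4 latent steps, greedy decoding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "Question: What is 2 + 3 * 4? Answer:"
input_ids = tok(prompt, return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(input_ids)

with torch.no_grad():
    # Encode the prompt once, keeping the KV cache.
    out = model(inputs_embeds=embeds, use_cache=True, output_hidden_states=True)
    past = out.past_key_values
    # Last-layer hidden state of the final position acts as a "continuous thought".
    thought = out.hidden_states[-1][:, -1:, :]

    # Latent steps: feed the hidden state back as the next input embedding,
    # never decoding it into a visible token (reasoning stays in latent space).
    for _ in range(4):
        out = model(inputs_embeds=thought, past_key_values=past,
                    use_cache=True, output_hidden_states=True)
        past = out.past_key_values
        thought = out.hidden_states[-1][:, -1:, :]

    # Switch back to ordinary (explicit) decoding for the visible answer.
    next_token = out.logits[:, -1, :].argmax(dim=-1)

print(tok.decode(next_token))
```

In the methods surveyed, the model is trained so that these latent steps carry useful intermediate computation; the sketch above only demonstrates how reasoning can proceed without emitting intermediate tokens.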
Authors (10)
Xinghao Chen
Anhao Zhao
Heming Xia
Xuan Lu
Hanlin Wang
Yanjun Chen
and 4 additional authors
Key Contributions
Presents a comprehensive survey and systematic taxonomy of latent Chain-of-Thought (CoT) reasoning in LLMs. It analyzes recent advances, categorizes methods along token-wise (horizontal) and layer-wise (vertical) dimensions, and discusses design principles, applications, and open problems, highlighting the potential of decoupling reasoning from explicit language generation for richer representations and faster inference.
Business Value
Provides a foundational understanding of advanced reasoning techniques in LLMs, guiding future research and development towards more capable and efficient AI systems for complex problem-solving.