Abstract
Reasoning models (RMs), language models (LMs) trained with reinforcement
learning to produce long-form natural language reasoning, have been remarkably
successful, but they still require large amounts of computation and data to
train, and can be slow and expensive to run. In this paper, we show that
standard instruct LMs can already be elicited to be strong reasoners at a level
comparable to or even surpassing their corresponding RMs (e.g., DeepSeek V3 vs
R1) without finetuning, across diverse domains from instruction following and
creative generation to mathematical reasoning. This is achieved by CodeAdapt,
our simple recipe that combines the CodeAct framework, where LMs interleave
natural language reasoning with code execution in a multi-step fashion, with
few-shot bootstrap in-context learning from as few as five training problems.
Analyzing four matched pairs of LMs and RMs, we find that CodeAdapt enables
three LMs to outperform the corresponding RMs on average over eight tasks (up
to 22.9%) while being 10-81% more token-efficient, and delivers superior
performance on six tasks when averaged over the four models (up to 35.7%).
Furthermore, the code-augmented reasoning traces display rich and varied
problem-solving strategies. Our findings support that (1) CodeAdapt-style
learning and reasoning may be robust and domain general and (2) code-enabled
LMs are cognitively grounded and powerful systems, potentially providing a
strong foundation for in-weight reinforcement learning.
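To make the CodeAdapt recipe concrete, below is a minimal sketch of a CodeAct-style loop in which an instruct LM interleaves natural-language reasoning with code execution over multiple steps, with few-shot examples prepended in context. This is an illustrative reading of the abstract, not the paper's implementation: the names (codeact_solve, query_lm, run_code, MAX_STEPS) and the message format are assumptions, and a real system would execute model-written code in a sandbox.

```python
import contextlib
import io
import re

MAX_STEPS = 10  # hypothetical cap on reasoning/execution rounds


def extract_code(reply: str):
    """Return the first fenced Python block in an LM reply, or None if absent."""
    match = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else None


def run_code(code: str) -> str:
    """Execute model-written code and capture stdout (a real system would sandbox this)."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})
    except Exception as exc:
        return f"Error: {exc!r}"
    return buffer.getvalue()


def codeact_solve(problem: str, few_shot_messages: list, query_lm):
    """Multi-step loop: the LM reasons in text, emits code, and sees execution results.

    `few_shot_messages` holds bootstrapped solution traces from a handful of
    training problems; `query_lm` is any chat-completion backend for a
    standard instruct LM.
    """
    messages = list(few_shot_messages) + [{"role": "user", "content": problem}]
    for _ in range(MAX_STEPS):
        reply = query_lm(messages)
        messages.append({"role": "assistant", "content": reply})
        code = extract_code(reply)
        if code is None:  # no code block: treat the reply as the final answer
            return reply
        observation = run_code(code)  # interleave execution with reasoning
        messages.append({"role": "user", "content": f"Observation:\n{observation}"})
    return None  # step budget exhausted without a final answer
```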
Authors (5)
Cedegao E. Zhang
Cédric Colas
Gabriel Poesia
Joshua B. Tenenbaum
Jacob Andreas
Submitted
October 23, 2025
Key Contributions
This paper demonstrates that standard instruct LLMs, when combined with the CodeAdapt framework and few-shot in-context learning, can achieve reasoning performance comparable to or exceeding that of dedicated reasoning models without any fine-tuning. This approach significantly reduces the computational cost and data requirements for achieving strong reasoning performance.
Business Value
Enables more efficient and cost-effective deployment of powerful reasoning capabilities in LLM-based applications, potentially reducing infrastructure costs and improving user experience for tasks requiring complex reasoning.