📄 Abstract
This paper investigates Reinforcement Learning (RL) approaches to enhance the
reasoning capabilities of Large Language Model (LLM) agents in long-horizon,
multi-turn scenarios. Although RL algorithms such as Group Relative Policy
Optimization (GRPO) and Proximal Policy Optimization (PPO) have been widely
applied to train multi-turn LLM agents, they typically rely only on sparse
outcome rewards and lack dense intermediate signals across multiple decision
steps, limiting their performance on complex reasoning tasks. To bridge this
gap, we present the first systematic study of turn-level reward design
for multi-turn RL algorithms and agent applications. By integrating turn-level
rewards, we extend GRPO and PPO to their respective multi-turn variants,
enabling fine-grained credit assignment. We conduct case studies on multi-turn
reasoning-augmented search agents, where we carefully design two types of
turn-level rewards: verifiable and LLM-as-judge. Our experiments on multi-turn
search tasks demonstrate that incorporating well-designed turn-level rewards
enables RL algorithms to significantly outperform baseline methods with
trajectory-level rewards. Both training and validation reward curves illustrate
that our method achieves \textit{greater stability}, \textit{faster
convergence}, and \textit{higher accuracy}. Numerical results across diverse
question-answering datasets further show that our approach consistently
delivers highest answer correctness and 100\% format correctness.
Authors (11)
Quan Wei
Siliang Zeng
Chenliang Li
William Brown
Oana Frunza
Wei Deng
+5 more
Key Contributions
This paper systematically studies 'turn-level reward design' for Reinforcement Learning (RL) in multi-turn LLM agents. By integrating turn-level rewards (verifiable and LLM-as-judge), it enables fine-grained credit assignment, improving LLM reasoning capabilities in complex, long-horizon scenarios.
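As a rough illustration of the two reward types named above, the hypothetical sketch below pairs a verifiable check (a regex over an assumed <think>/<search>/<answer> turn format) with an LLM-as-judge scorer behind an assumed judge_fn callable. Neither reflects the paper's actual prompts, tags, or scoring scheme.

```python
# Illustrative sketch of two turn-level reward types; tag names and signatures are assumptions.
import re


def verifiable_turn_reward(turn_text: str) -> float:
    """Return 1.0 if the turn follows an assumed <think> + <search>/<answer> format, else 0.0."""
    has_reasoning = bool(re.search(r"<think>.*?</think>", turn_text, re.DOTALL))
    has_action = bool(
        re.search(r"<search>.*?</search>|<answer>.*?</answer>", turn_text, re.DOTALL)
    )
    return 1.0 if has_reasoning and has_action else 0.0


def judge_turn_reward(turn_text: str, question: str, judge_fn) -> float:
    """Score a turn with an external LLM judge.

    `judge_fn` is an assumed callable (e.g., a wrapper around a model API)
    that takes a prompt string and returns a scalar in [0, 1].
    """
    prompt = (
        f"Question: {question}\n"
        f"Agent turn: {turn_text}\n"
        "Rate from 0 to 1 how much this turn helps answer the question."
    )
    return float(judge_fn(prompt))
```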
Business Value
Enables the development of more capable and reliable AI agents that can perform complex reasoning tasks over extended interactions, leading to better conversational AI, advanced search tools, and more sophisticated autonomous systems.