Abstract
The reasoning capabilities of large language models (LLMs) have advanced
rapidly, particularly following the release of DeepSeek R1, which has inspired
a surge of research into data quality and reinforcement learning (RL)
algorithms. Despite the pivotal role diversity plays in RL, its influence on
LLM reasoning remains largely underexplored. To bridge this gap, this work
presents a systematic investigation into the impact of diversity in RL-based
training for LLM reasoning and proposes a novel diversity-aware policy
optimization method. Across evaluations on 12 LLMs, we observe a strong
positive correlation between solution diversity and Potential at k (a novel
metric quantifying an LLM's reasoning potential) in high-performing models.
This finding motivates our method to explicitly promote diversity during RL
training. Specifically, we design a token-level diversity measure and reformulate it
into a practical objective, which we selectively apply to positive samples.
Integrated into the R1-zero training framework, our method achieves a 3.5%
average improvement across four mathematical reasoning benchmarks,
while generating more diverse and robust solutions.
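The abstract does not spell out how Potential at k is computed. One plausible reading, and a common way to quantify how often an LLM reaches a correct solution within k samples, is the unbiased pass@k estimator of Chen et al. (2021); the sketch below implements that estimator, and whether the paper's Potential at k coincides with it is an assumption.

```python
import numpy as np

def potential_at_k(num_samples: int, num_correct: int, k: int) -> float:
    """Pass@k-style estimate: probability that at least one of k sampled
    solutions to a problem is correct, given num_correct correct solutions
    among num_samples drawn (unbiased estimator of Chen et al., 2021)."""
    if num_samples - num_correct < k:
        return 1.0  # every size-k subset must contain a correct solution
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - float(np.prod(
        1.0 - k / np.arange(num_samples - num_correct + 1, num_samples + 1)
    ))

# Example: 4 correct solutions out of 16 samples, evaluated at k = 8
print(round(potential_at_k(16, 4, 8), 3))  # ~0.962
```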
Authors (5)
Jian Yao
Ran Cheng
Xingyu Wu
Jibin Wu
Kay Chen Tan
Key Contributions
This paper systematically investigates the impact of diversity on LLM reasoning and proposes a novel diversity-aware policy optimization method. It introduces a token-level diversity measure and a new metric, Potential at k, demonstrates a strong positive correlation between solution diversity and reasoning potential, and shows how to leverage this correlation to improve LLM reasoning.
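How the diversity objective enters training is described only at a high level here. Below is a minimal sketch of one way it could look, assuming a GRPO-style group baseline as used in R1-zero training; the per-token diversity signal (stood in for by policy entropy) and the weight beta are placeholders, not the paper's actual formulation.

```python
import torch

def diversity_aware_advantages(
    rewards: torch.Tensor,          # (G,) scalar rewards for G sampled solutions to one prompt
    token_diversity: torch.Tensor,  # (G, T) per-token diversity signal, e.g. policy entropy;
                                    # the paper's token-level measure is not specified here
    beta: float = 0.05,             # hypothetical weight on the diversity bonus
) -> torch.Tensor:
    """Group-normalized (GRPO-style) advantages with a token-level diversity
    bonus added only to positively rewarded samples."""
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)         # (G,) group baseline
    adv = adv.unsqueeze(-1).expand_as(token_diversity).clone()        # broadcast to (G, T)
    positive = (rewards > 0).unsqueeze(-1)                            # (G, 1) mask of positive samples
    return torch.where(positive, adv + beta * token_diversity, adv)   # bonus on positive samples only
```

These per-token advantages would then feed a standard clipped policy-gradient loss, so the diversity term only reshapes updates on solutions that already earned reward.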
Business Value
Enhances the reliability and capability of LLMs for complex reasoning tasks, leading to more robust AI assistants, better content generation, and improved problem-solving tools.