
The Road Less Traveled: Enhancing Exploration in LLMs via Sequential Sampling

reinforcement-learning
📄 Abstract

Reinforcement learning (RL) has been pivotal in enhancing the reasoning capabilities of large language models (LLMs), but it often suffers from limited exploration and entropy collapse, where models exploit a narrow set of solutions, leading to a loss of sampling diversity and subsequently preventing RL from further improving performance. This issue is exacerbated in parallel sampling methods, where multiple outputs are drawn from the same distribution, potentially causing the model to converge to similar solutions. We propose SESA, a novel SEquential SAmpling framework that mitigates this challenge by generating diverse solution sketches sequentially before expanding them into full reasoning paths. This approach ensures broader exploration by conditioning each new output on previous ones, promoting diversity throughout the process and preventing policy collapse. Our experiments on a synthetic task show that sequential sampling consistently outperforms traditional RL methods in terms of path diversity and recovery from collapse. Further evaluations on real-world tasks demonstrate that SESA improves both the exploration of valid strategies and the overall performance of LLMs. On three agent benchmarks, SESA lifts success rates by $+0.25$, $+0.42$, and $+0.07$ absolute over the base model (up to an additional $211\%$ relative improvement over baseline RL), underscoring its exploration advantage. This work introduces a structured approach to exploration, paving the way for more effective and diverse reasoning in RL-trained LLMs. Our code is released at https://github.com/MuLabPKU/sesa.
Authors (2)
Shijia Kang
Muhan Zhang
Submitted
October 17, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

This paper proposes SESA, a novel Sequential Sampling framework that enhances exploration in RL for LLMs. SESA mitigates entropy and policy collapse by generating diverse solution sketches sequentially, conditioning each new output on the ones before it. This conditioning promotes diversity and prevents convergence to near-identical solutions; in the authors' experiments it outperforms traditional RL methods in both path diversity and task success.
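The sequential-conditioning idea described above can be sketched as a simple loop in which each new sample sees the sketches drawn so far. This is an illustrative sketch only: `generate_sketch`, the prompt format, and the "Avoid:" convention are hypothetical stand-ins, not the paper's actual API (see the linked repository for the real implementation).

```python
def generate_sketch(prompt: str) -> str:
    """Stub for an LLM sampling call. A real implementation would query a
    model; here we derive a distinct tag from how many prior sketches the
    prompt already lists, just to make the loop runnable."""
    return f"sketch-{prompt.count('Avoid')}"


def sequential_sample(question: str, n: int) -> list[str]:
    """Draw n solution sketches one at a time, feeding earlier sketches back
    into the prompt so each new sample is conditioned on its predecessors --
    the sequential alternative to drawing n parallel i.i.d. samples."""
    sketches: list[str] = []
    for _ in range(n):
        avoid = "".join(f"Avoid: {s}\n" for s in sketches)
        prompt = f"{question}\n{avoid}Propose a new, different solution sketch:"
        sketches.append(generate_sketch(prompt))
    return sketches


paths = sequential_sample("Solve the puzzle.", 3)
```

Because each prompt grows to include all previous sketches, later samples are explicitly pushed toward unexplored strategies, whereas parallel sampling would draw every output from the same unconditioned distribution.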

Business Value

Improves the ability of LLMs to tackle complex reasoning tasks by enhancing their exploration capabilities, leading to more robust and creative AI solutions in areas like content generation, scientific discovery, and complex problem-solving.