Abstract
Reinforcement learning from verifiable rewards has emerged as a powerful
technique for enhancing the complex reasoning abilities of Large Language
Models (LLMs). However, these methods are fundamentally constrained by the
"learning cliff" phenomenon: when faced with problems far beyond their
current capabilities, models consistently fail, yielding a persistent
zero-reward signal. In policy optimization algorithms like GRPO, this collapses
the advantage calculation to zero, rendering these difficult problems invisible
to the learning gradient and stalling progress. To overcome this, we introduce
Scaf-GRPO (Scaffolded Group Relative Policy Optimization), a progressive
training framework that strategically provides minimal guidance only when a
model's independent learning has plateaued. The framework first diagnoses
learning stagnation and then intervenes by injecting tiered in-prompt hints,
ranging from abstract concepts to concrete steps, enabling the model to
construct a valid solution by itself. Extensive experiments on challenging
mathematics benchmarks demonstrate Scaf-GRPO's effectiveness, boosting the
pass@1 score of the Qwen2.5-Math-7B model on the AIME24 benchmark by a relative
44.3% over a vanilla GRPO baseline. This result demonstrates that our framework
provides a robust and effective methodology for unlocking a model's ability to
solve problems previously beyond its reach, a critical step towards extending
the frontier of autonomous reasoning in LLMs.
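The advantage collapse described above can be seen directly in the group-relative normalization GRPO uses. The sketch below (the function name is illustrative; the normalization follows the standard group mean/std formulation) shows that when every rollout in a group fails and earns zero reward, all advantages become exactly zero, so the problem contributes no gradient:

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: each reward minus the group mean,
    scaled by the group standard deviation (eps avoids division by zero)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Mixed outcomes within a group yield informative, nonzero advantages.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))

# "Learning cliff": every rollout fails, so every advantage is zero
# and the problem becomes invisible to the policy gradient.
print(grpo_advantages([0.0, 0.0, 0.0, 0.0]))  # -> [0. 0. 0. 0.]
```

Scaf-GRPO's in-prompt hints aim to break exactly this degenerate case: once at least one rollout succeeds, the group rewards vary and the advantages become nonzero again.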
Authors (7)
Xichen Zhang
Sitong Wu
Yinghao Zhu
Haoru Tan
Shaozuo Yu
Ziyi He
+1 more
Submitted
October 22, 2025
Key Contributions
Scaf-GRPO is a progressive training framework designed to overcome the 'learning cliff' in LLM reinforcement learning by strategically providing minimal, tiered guidance only when learning stagnates. This approach diagnoses learning plateaus and intervenes with hints, enabling models to tackle problems beyond their current capabilities and improving gradient signals.
Business Value
Allows for the development of more capable and robust LLMs that can handle complex reasoning tasks, leading to more sophisticated AI applications in areas like scientific discovery, complex problem-solving, and advanced content generation.