Research Paper · Relevant to: AI Alignment Researchers, RL Researchers, LLM Developers, ML Engineers

$Q\sharp$: Provably Optimal Distributional RL for LLM Post-Training

📄 Abstract

Reinforcement learning (RL) post-training is crucial for LLM alignment and reasoning, but existing policy-based methods, such as PPO and DPO, can fall short of fixing shortcuts inherited from pre-training. In this work, we introduce $Q\sharp$, a value-based algorithm for KL-regularized RL that guides the reference policy using the optimal regularized $Q$ function. We propose to learn the optimal $Q$ function using distributional RL on an aggregated online dataset. Unlike prior value-based baselines that guide the model using unregularized $Q$-values, our method is theoretically principled and provably learns the optimal policy for the KL-regularized RL problem. Empirically, $Q\sharp$ outperforms prior baselines in math reasoning benchmarks while maintaining a smaller KL divergence to the reference policy. Theoretically, we establish a reduction from KL-regularized RL to no-regret online learning, providing the first bounds for deterministic MDPs under only realizability. Thanks to distributional RL, our bounds are also variance-dependent and converge faster when the reference policy has small variance. In sum, our results highlight $Q\sharp$ as an effective approach for post-training LLMs, offering both improved performance and theoretical guarantees. The code can be found at https://github.com/jinpz/q_sharp.
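For context, the KL-regularized RL objective referenced in the abstract has a standard closed-form optimum in which the reference policy is reweighted by the exponentiated regularized $Q$-function. The sketch below states that setup in our own notation ($\beta$ for the regularization strength, $\pi_{\mathrm{ref}}$ for the reference policy), which may differ from the paper's.

```latex
% Standard KL-regularized RL objective and its closed-form optimum.
% Notation (\beta, \pi_{\mathrm{ref}}, Q^{\star}) is chosen here for illustration.
\[
  \pi^{\star}
    \;=\; \arg\max_{\pi}\;
      \mathbb{E}_{\pi}\bigl[ r(s,a) \bigr]
      \;-\; \beta\,\mathrm{KL}\bigl( \pi \,\|\, \pi_{\mathrm{ref}} \bigr),
  \qquad
  \pi^{\star}(a \mid s)
    \;\propto\;
      \pi_{\mathrm{ref}}(a \mid s)\,
      \exp\!\bigl( Q^{\star}(s,a)/\beta \bigr).
\]
```

Here $Q^{\star}$ denotes the optimal KL-regularized $Q$-function; guiding the reference policy with it is what distinguishes $Q\sharp$ from baselines that use unregularized $Q$-values.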
Authors (8)
Jin Peng Zhou
Kaiwen Wang
Jonathan Chang
Zhaolin Gao
Nathan Kallus
Kilian Q. Weinberger
+2 more
Submitted
February 27, 2025
arXiv Category
cs.LG

Key Contributions

Introduces $Q\sharp$, a theoretically principled value-based algorithm for KL-regularized RL post-training of LLMs that provably learns the optimal regularized policy. It learns the optimal regularized $Q$-function with distributional RL on an aggregated online dataset and uses it to guide the reference policy, outperforming prior baselines on math reasoning benchmarks while maintaining a smaller KL divergence to the reference policy. An illustrative decoding sketch follows below.
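The snippet below is a minimal illustration of this guidance step, not the authors' released implementation (see the linked repository for that): reference-policy logits for candidate next tokens are shifted by scaled $Q$ estimates, realizing the closed-form optimum $\pi(a \mid s) \propto \pi_{\mathrm{ref}}(a \mid s)\exp(Q(s,a)/\beta)$. Function and variable names (`guided_token_distribution`, `q_values`, `beta`) are placeholders chosen here.

```python
# Illustrative sketch, assuming per-token Q estimates are available at decode time.
import numpy as np

def guided_token_distribution(ref_logits: np.ndarray,
                              q_values: np.ndarray,
                              beta: float = 1.0) -> np.ndarray:
    """Combine reference-policy logits with Q estimates for candidate tokens.

    Computes pi(a|s) proportional to pi_ref(a|s) * exp(Q(s,a) / beta),
    the closed-form optimum of KL-regularized RL against pi_ref.
    """
    logits = ref_logits + q_values / beta   # shift log-probabilities by scaled Q
    logits = logits - logits.max()          # stabilize the softmax
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy usage: four candidate next tokens.
ref_logits = np.array([2.0, 1.0, 0.5, -1.0])
q_values = np.array([0.1, 0.8, 0.0, 0.2])
print(guided_token_distribution(ref_logits, q_values, beta=0.5))
```

A smaller `beta` trusts the $Q$ estimates more and moves further from the reference policy; a larger `beta` keeps the guided distribution close to $\pi_{\mathrm{ref}}$.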

Business Value

Enables more effective and reliable alignment of LLMs, yielding systems that reason better on complex tasks and adhere more closely to desired behaviors, which is crucial for advanced AI applications.