OpenReward: Learning to Reward Long-form Agentic Tasks via Reinforcement Learning

Abstract

Reward models (RMs) have become essential for aligning large language models (LLMs), serving as scalable proxies for human evaluation in both training and inference. However, existing RMs struggle on knowledge-intensive and long-form tasks, where evaluating correctness requires grounding beyond the model's internal knowledge. This limitation hinders them from reliably discriminating subtle quality differences, especially when external evidence is necessary. To address this, we introduce OpenRM, a tool-augmented long-form reward model that systematically judges open-ended responses by invoking external tools to gather relevant evidence. We train OpenRM with Group Relative Policy Optimization (GRPO) on over 27K synthesized pairwise examples generated through a controllable data synthesis framework. The training objective jointly supervises intermediate tool usage and final outcome accuracy, incentivizing our reward model to learn effective evidence-based judgment strategies. Extensive experiments on three newly-collected datasets and two widely-used benchmarks demonstrate that OpenRM substantially outperforms existing reward modeling approaches. As a further step, we integrate OpenRM into both inference-time response selection and training-time data selection. This yields consistent gains in downstream LLM alignment tasks, highlighting the potential of tool-augmented reward models for scaling reliable long-form evaluation.
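To make the core idea concrete, the sketch below shows what an evidence-gathering pairwise judge might look like. It is a minimal illustration only: the `search` and `judge_llm` callables, the prompts, the tool-call budget, and the verdict format are all assumptions, not the paper's actual interface.

```python
# Minimal sketch of a tool-augmented pairwise judge.
# `search(query)` and `judge_llm(prompt)` are hypothetical helpers standing in
# for the evidence tools and judge model described in the abstract.

def tool_augmented_judge(question: str, response_a: str, response_b: str,
                         search, judge_llm, max_tool_calls: int = 3) -> str:
    """Return 'A' or 'B' after grounding the comparison in retrieved evidence."""
    evidence = []
    for _ in range(max_tool_calls):
        # Ask the judge what evidence it still needs; "NONE" ends the loop.
        query = judge_llm(
            f"Question: {question}\nEvidence so far: {evidence}\n"
            "What should be searched next to verify the responses? "
            "Reply with a search query, or NONE if enough evidence is gathered."
        ).strip()
        if query.upper().startswith("NONE"):
            break
        evidence.append(search(query))

    # Final verdict is conditioned on the gathered evidence, not on the
    # judge's internal knowledge alone.
    verdict = judge_llm(
        f"Question: {question}\nEvidence: {evidence}\n"
        f"Response A: {response_a}\nResponse B: {response_b}\n"
        "Using only the evidence above, reply with the single letter of the "
        "better-grounded response: A or B."
    )
    return "A" if verdict.strip().upper().startswith("A") else "B"
```

Such a judge can be dropped into best-of-N response selection at inference time or used to filter preference data for training, which is how the abstract describes deploying OpenRM downstream.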
Authors (8)
Ziyou Hu
Zhengliang Shi
Minghang Zhu
Haitao Li
Teng Sun
Pengjie Ren
+2 more
Submitted
October 28, 2025
arXiv Category
cs.CL

Key Contributions

Introduces OpenRM, a tool-augmented reward model for aligning LLMs on long-form, knowledge-intensive tasks. It uses external tools to gather evidence and trains with GRPO on synthesized data, jointly supervising tool usage and outcome accuracy for effective evidence-based judgment.
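The joint supervision of tool usage and outcome accuracy can be pictured as a composite scalar reward per judging rollout. The sketch below is an illustrative decomposition under assumed terms and weights; the exact reward formulation used to train OpenRM is not specified on this page.

```python
# Illustrative joint reward for one judging rollout, combining an outcome term
# (did the verdict match the labeled preference?) with a tool-usage term
# (were the intermediate tool calls well-formed and productive?).
# The field names, terms, and weights are assumptions for illustration.

def joint_reward(rollout: dict, gold_preference: str,
                 outcome_weight: float = 1.0, tool_weight: float = 0.2) -> float:
    """Score a rollout for group-relative (GRPO-style) training."""
    # Outcome term: 1 if the final verdict agrees with the pairwise label.
    outcome = 1.0 if rollout["verdict"] == gold_preference else 0.0

    # Tool-usage term: fraction of tool calls that parsed correctly and
    # actually returned evidence.
    calls = rollout.get("tool_calls", [])
    tool_quality = (
        sum(1.0 for c in calls if c["well_formed"] and c["returned_evidence"]) / len(calls)
        if calls else 0.0
    )
    return outcome_weight * outcome + tool_weight * tool_quality
```

In GRPO, a scalar reward like this would then be normalized within each group of rollouts sampled for the same pairwise example to form the advantage used for the policy update.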

Business Value

Enables more reliable alignment of LLMs for complex, real-world tasks requiring external knowledge, leading to safer and more capable AI systems.