Abstract
Scaling test-time compute has emerged as a key strategy for enhancing the
reasoning capabilities of large language models (LLMs), particularly in tasks
like mathematical problem-solving. A traditional approach, Self-Consistency
(SC), generates multiple solutions to a problem and selects the most common
answer via majority voting. Another common method involves scoring each
solution with a reward model (verifier) and choosing the best one. Recent
advancements in Generative Reward Models (GenRM) reframe verification as a
next-token prediction task, enabling inference-time scaling along a new axis.
Specifically, GenRM generates multiple verification chains-of-thought to score
each solution. Under a limited inference budget, this introduces a fundamental
trade-off: should you spend the budget on scaling solutions via SC or generate
fewer solutions and allocate compute to verification via GenRM? To address
this, we evaluate GenRM against SC under a fixed inference budget.
Interestingly, we find that SC is more compute-efficient than GenRM for most
practical inference budgets across diverse models and datasets. For instance,
GenRM first matches SC after consuming up to 8x the inference compute and
requires significantly more compute to outperform it. Furthermore, we derive
inference scaling laws for the GenRM paradigm, revealing that compute-optimal
inference favors scaling solution generation more aggressively than scaling the
number of verifications. Our work provides practical guidance on optimizing
test-time scaling by balancing solution generation and verification. The code
is available at https://github.com/nishadsinghi/sc-genrm-scaling.
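To make the trade-off concrete, the minimal sketch below contrasts the two strategies under an equal budget of model generations, treating one solution and one verification chain-of-thought as costing roughly one generation each. The helpers generate_solution and generate_verification_score are hypothetical placeholders for LLM calls, not the paper's released implementation.

```python
import random
from collections import Counter

# Hypothetical stand-ins for model calls; the names and signatures below are
# assumptions for illustration, not the paper's code.
def generate_solution(problem: str) -> str:
    """Sample one chain-of-thought solution and return its final answer."""
    return random.choice(["42", "41", "42"])  # placeholder for an LLM sample

def generate_verification_score(problem: str, solution: str) -> float:
    """Sample one verification chain-of-thought and return a score in [0, 1]."""
    return random.random()  # placeholder for a GenRM verification sample

def self_consistency(problem: str, budget: int) -> str:
    """Spend the whole budget on solutions and majority-vote over answers (SC)."""
    answers = [generate_solution(problem) for _ in range(budget)]
    return Counter(answers).most_common(1)[0][0]

def genrm_best_of_n(problem: str, num_solutions: int, verifs_per_solution: int) -> str:
    """Spend part of the budget on solutions and the rest on verifying each one.
    Total generations used: num_solutions * (1 + verifs_per_solution)."""
    solutions = [generate_solution(problem) for _ in range(num_solutions)]
    avg_scores = [
        sum(generate_verification_score(problem, s) for _ in range(verifs_per_solution))
        / verifs_per_solution
        for s in solutions
    ]
    best = max(range(len(solutions)), key=lambda i: avg_scores[i])
    return solutions[best]

# Same budget of 32 generations: SC uses all 32 on solutions, whereas GenRM
# here uses 8 solutions x (1 + 3 verifications) = 32 generations.
print(self_consistency("What is 6 * 7?", budget=32))
print(genrm_best_of_n("What is 6 * 7?", num_solutions=8, verifs_per_solution=3))
```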
Authors (7)
Nishad Singhi
Hritik Bansal
Arian Hosseini
Aditya Grover
Kai-Wei Chang
Marcus Rohrbach
and 1 more
Key Contributions
This paper analyzes the fundamental trade-off between scaling solution generation (Self-Consistency) and scaling verification (Generative Reward Models) for LLM reasoning under a fixed inference budget. It provides an evaluation framework for deciding when to prioritize generating more candidate solutions versus when to invest compute in verifying fewer solutions more thoroughly, aiming to optimize reasoning performance.
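A rough way to see the budget split this framework reasons about, under the simplifying assumption that a verification chain-of-thought costs about as much as a solution: a budget of C generations can fund either C self-consistency samples, or S solutions with V verifications each, where S * (1 + V) <= C. The small enumeration below (not taken from the paper's code) lists a few such splits.

```python
# Illustrative budget accounting (assumption: one verification costs roughly
# as much as one solution); an assumed example, not the paper's code.
C = 64  # total generation budget
for S in (64, 32, 16, 8, 4):       # number of candidate solutions
    V = C // S - 1                 # verifications per solution that fill the budget
    print(f"{S} solutions x (1 + {V} verifications) = {S * (1 + V)} generations")
```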
Business Value
Helps optimize the deployment of LLMs for complex reasoning tasks by ensuring the most effective use of computational resources, leading to better performance and cost-efficiency in applications requiring high-accuracy reasoning.