Abstract
Recent advances in reinforcement learning (RL) have strengthened the
reasoning capabilities of vision-language models (VLMs). However, enhancing
policy exploration to better scale test-time compute remains largely
underexplored. In addition, VLMs continue to struggle with imperfect visual
perception, which in turn affects the subsequent reasoning process. We
introduce NoisyRollout, a simple yet effective data augmentation method that
addresses these issues by mixing training trajectories from both clean and
moderately distorted images. This approach injects perceptual diversity,
encouraging better policy exploration and leading to more robust reasoning. A
noise annealing schedule gradually reduces distortion strength, aiding
exploration early in training while ensuring later stability. Crucially, our
method is easy to adopt, requiring no additional training cost and no
modifications to the RL objective. Extensive experiments on 2 distinct training
datasets demonstrate that NoisyRollout achieves state-of-the-art performance
among open-source RL-tuned models across 5 out-of-domain reasoning and
perception benchmarks. Furthermore, we validate the effectiveness of
NoisyRollout across model sizes (7B and 32B), data scales (from 1K to 6K) and
image augmentation types (Gaussian noise and rotation), highlighting its
generalizability and scalability.
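The core mechanism described above can be illustrated with a minimal sketch. The function names, the linear annealing schedule, and the clean/noisy split are illustrative assumptions, not the paper's actual implementation; the paper specifies only that rollouts are mixed from clean and moderately distorted images, with distortion strength annealed over training.

```python
import numpy as np

def anneal_noise_strength(step, total_steps, sigma_max=0.05, sigma_min=0.0):
    # Assumed linear schedule: distortion starts at sigma_max to aid
    # exploration early in training, then decays toward sigma_min for stability.
    frac = min(step / total_steps, 1.0)
    return sigma_max + frac * (sigma_min - sigma_max)

def distort_image(image, sigma, rng):
    # Gaussian pixel noise (one of the augmentation types mentioned in the
    # abstract); clip to keep pixel values in the valid [0, 1] range.
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def build_rollout_inputs(image, n_clean, n_noisy, step, total_steps, rng):
    # Mix rollout inputs from the clean image and moderately distorted copies;
    # trajectories sampled from both feed the same (unmodified) RL objective.
    sigma = anneal_noise_strength(step, total_steps)
    clean_inputs = [image] * n_clean
    noisy_inputs = [distort_image(image, sigma, rng) for _ in range(n_noisy)]
    return clean_inputs + noisy_inputs
```

Because the augmentation only changes which images are fed to the policy during rollout generation, no change to the loss or optimizer is needed, which is what makes the method drop-in for existing RL pipelines.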
Authors (8)
Xiangyan Liu
Jinjie Ni
Zijian Wu
Chao Du
Longxu Dou
Haonan Wang
+2 more
Key Contributions
Introduces NoisyRollout, a simple and effective data augmentation method for reinforcement learning that mixes training trajectories from clean and distorted images. This approach enhances policy exploration, improves robustness, and leads to better reasoning without additional training cost or modification to the RL objective.
Business Value
Leads to more capable and reliable AI agents for tasks requiring interaction with the physical world or complex visual environments, such as robotics and autonomous systems, reducing errors caused by perceptual ambiguities.