
RLinf-VLA: A Unified and Efficient Framework for VLA+RL Training

Abstract

Recent progress in vision and language foundation models has significantly advanced multimodal understanding, reasoning, and generation, inspiring a surge of interest in extending such capabilities to embodied settings through vision-language-action (VLA) models. Yet, most VLA models are still trained with supervised fine-tuning (SFT), which struggles to generalize under distribution shifts due to error accumulation. Reinforcement learning (RL) offers a promising alternative by directly optimizing task performance through interaction, but existing attempts remain fragmented and lack a unified platform for fair and systematic comparison across model architectures and algorithmic designs. To address this gap, we introduce RLinf-VLA, a unified and efficient framework for scalable RL training of VLA models. The system adopts a highly flexible resource allocation design that addresses the challenge of integrating rendering, training, and inference in RL+VLA training. In particular, for GPU-parallelized simulators, RLinf-VLA implements a novel hybrid fine-grained pipeline allocation mode, achieving a 1.61x-1.88x speedup in training. Through a unified interface, RLinf-VLA seamlessly supports diverse VLA architectures (e.g., OpenVLA, OpenVLA-OFT), multiple RL algorithms (e.g., PPO, GRPO), and various simulators (e.g., ManiSkill, LIBERO). In simulation, a unified model achieves 98.11% across 130 LIBERO tasks and 97.66% across 25 ManiSkill tasks. Beyond empirical performance, our study distills a set of best practices for applying RL to VLA training and sheds light on emerging patterns in this integration. Furthermore, we present preliminary deployment on a real-world Franka robot, where RL-trained policies exhibit stronger generalization than those trained with SFT. We envision RLinf-VLA as a foundation to accelerate and standardize research on embodied intelligence.
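
The abstract names PPO and GRPO among the supported RL algorithms. As a minimal, generic sketch of the group-relative advantage normalization at the core of GRPO (illustrative only; the function name, tensor shapes, and reward layout are assumptions and are not taken from the RLinf-VLA codebase):

```python
# Generic sketch of GRPO-style group-relative advantage normalization.
# Not the RLinf-VLA implementation; shapes and names are illustrative.
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Compute group-relative advantages for a batch of rollouts.

    rewards: tensor of shape (num_groups, group_size), where each group holds
             rollouts generated for the same task/prompt.
    Returns a tensor of the same shape: each reward is normalized by the mean
    and standard deviation of its own group, so no learned value critic is
    required.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 tasks, 4 rollouts each (binary task-success rewards).
rewards = torch.tensor([[1.0, 0.0, 1.0, 0.0],
                        [0.2, 0.8, 0.5, 0.5]])
print(grpo_advantages(rewards))
```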

Key Contributions

RLinf-VLA is a unified and efficient framework for scalable RL training of Vision-Language-Action (VLA) models. It addresses the fragmentation of existing RL+VLA research by providing a common platform for fair, systematic comparison across model architectures and algorithms, and it introduces a flexible resource allocation design that integrates rendering, training, and inference, including a hybrid fine-grained pipeline allocation mode that yields a 1.61x-1.88x training speedup on GPU-parallelized simulators, with the goal of improving generalization beyond SFT.
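
Conceptually, the flexible resource allocation amounts to deciding which GPUs each stage (rendering, inference, training) runs on and whether stages share devices. The dataclass below is a hypothetical sketch of such a plan for illustration only; all names and fields are assumptions, not the RLinf-VLA API.

```python
# Hypothetical resource-allocation plan for RL+VLA training.
# All class, field, and value choices are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StageAllocation:
    gpu_ids: List[int]        # GPUs assigned to this stage
    colocated: bool = False   # whether the stage shares GPUs with another stage

@dataclass
class AllocationPlan:
    # A hybrid fine-grained pipeline would interleave simulator rendering,
    # policy inference, and gradient updates rather than dedicating a
    # disjoint GPU pool to each stage.
    rendering: StageAllocation = field(
        default_factory=lambda: StageAllocation([0, 1], colocated=True))
    inference: StageAllocation = field(
        default_factory=lambda: StageAllocation([0, 1], colocated=True))
    training: StageAllocation = field(
        default_factory=lambda: StageAllocation([2, 3]))

plan = AllocationPlan()
print(plan)
```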

Business Value

Accelerates the development of more capable and adaptable embodied AI agents for robotics and virtual environments.