
RL-100: Performant Robotic Manipulation with Real-World Reinforcement Learning

Abstract

Real-world robotic manipulation in homes and factories demands reliability, efficiency, and robustness that approach or surpass skilled human operators. We present RL-100, a real-world reinforcement learning training framework built on diffusion visuomotor policies trained by supervised learning. RL-100 introduces a three-stage pipeline. First, imitation learning leverages human priors. Second, iterative offline reinforcement learning uses an Offline Policy Evaluation (OPE) procedure to gate PPO-style updates applied in the denoising process, yielding conservative and reliable improvement. Third, online reinforcement learning eliminates residual failure modes. An additional lightweight consistency distillation head compresses the multi-step diffusion sampling process into a single-step policy, enabling high-frequency control with an order-of-magnitude reduction in latency while preserving task performance. The framework is task-, embodiment-, and representation-agnostic: it supports both 3D point-cloud and 2D RGB inputs, a variety of robot platforms, and both single-step and action-chunk policies. We evaluate RL-100 on seven real-robot tasks spanning dynamic rigid-body control (Push-T and Agile Bowling), fluid and granular pouring, deformable cloth folding, precise dexterous unscrewing, and multi-stage orange juicing. RL-100 attains 100% success across evaluated trials, 900 out of 900 episodes in total, including up to 250 out of 250 consecutive trials on one task. The method matches or exceeds human teleoperation in time efficiency and demonstrates multi-hour robustness, with uninterrupted operation lasting up to two hours.
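The OPE-gated offline stage can be pictured as an accept/reject loop: a PPO-style candidate update to the denoising policy is only deployed if an off-policy estimate of its return beats the current policy on logged data. The sketch below is a minimal illustration of that gating idea; the estimator (weighted importance sampling), the function names (`estimate_return_wis`, `ope_gate`), and the `margin` parameter are illustrative assumptions, not the paper's actual OPE procedure.

```python
# Hypothetical sketch of an OPE gate for conservative offline policy updates.
# Episodes are lists of (state, action, reward) tuples; *_logp are callables
# returning log pi(a | s) for the respective policy.
import numpy as np

def estimate_return_wis(episodes, target_logp, behavior_logp, gamma=0.99):
    """Weighted importance-sampling estimate of a target policy's return
    from logged episodes; one common OPE choice, used here as a stand-in."""
    weights, returns = [], []
    for ep in episodes:
        # Per-episode importance weight: prod_t pi_target(a_t|s_t) / pi_beh(a_t|s_t).
        logw = sum(target_logp(s, a) - behavior_logp(s, a) for s, a, _ in ep)
        weights.append(np.exp(np.clip(logw, -20.0, 20.0)))  # clip for stability
        returns.append(sum((gamma ** t) * r for t, (_, _, r) in enumerate(ep)))
    weights, returns = np.asarray(weights), np.asarray(returns)
    return float((weights * returns).sum() / (weights.sum() + 1e-8))

def ope_gate(episodes, current_logp, candidate_logp, behavior_logp, margin=0.0):
    """Accept the candidate update only if its estimated return exceeds the
    current policy's estimate by at least `margin`."""
    v_cur = estimate_return_wis(episodes, current_logp, behavior_logp)
    v_cand = estimate_return_wis(episodes, candidate_logp, behavior_logp)
    return v_cand >= v_cur + margin, v_cur, v_cand
```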
Authors (9)
Kun Lei
Huanyu Li
Dongjie Yu
Zhenyu Wei
Lingxiao Guo
Zhennan Jiang
+3 more
Submitted
October 16, 2025
arXiv Category
cs.RO

Key Contributions

Presents RL-100, a real-world reinforcement learning framework for robotic manipulation using diffusion visuomotor policies. It employs a three-stage pipeline (imitation, offline RL, online RL) and introduces policy distillation for single-step control, achieving high-frequency operation with reduced latency while maintaining performance.
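A rough picture of the single-step distillation idea: a multi-step diffusion (teacher) policy denoises an action over K iterations, while the distilled head learns to produce the same action from the initial noise in one forward pass, cutting inference latency roughly by the number of denoising steps. The PyTorch sketch below is a toy illustration under that assumption; the module names, the simplified sampler without a noise schedule, and the plain MSE objective are stand-ins, not RL-100's actual consistency-distillation loss.

```python
# Toy single-step distillation of a multi-step action denoiser (illustrative only).
import torch
import torch.nn as nn

class ActionDenoiser(nn.Module):
    """Predicts a cleaner action from (observation, noisy action, step index)."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )
    def forward(self, obs, noisy_act, t):
        t = t.expand(obs.shape[0], 1)
        return self.net(torch.cat([obs, noisy_act, t], dim=-1))

@torch.no_grad()
def teacher_sample(denoiser, obs, noise, steps=10):
    """Multi-step sampling: iteratively refine a noise sample into an action."""
    a = noise
    for k in reversed(range(steps)):
        t = torch.full((1, 1), k / steps)
        a = denoiser(obs, a, t)  # simplistic refinement, no noise schedule
    return a

def distill_step(student, teacher, obs, act_dim, optimizer, steps=10):
    """One distillation update: the student's single-step output from a given
    noise sample should match the teacher's multi-step output from that noise."""
    noise = torch.randn(obs.shape[0], act_dim)
    with torch.no_grad():
        target = teacher_sample(teacher, obs, noise, steps)
    pred = student(obs, noise, torch.zeros(1, 1))  # single-step prediction
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Usage (shapes only):
#   teacher, student = ActionDenoiser(32, 7), ActionDenoiser(32, 7)
#   opt = torch.optim.Adam(student.parameters(), lr=1e-4)
#   loss = distill_step(student, teacher, torch.randn(64, 32), 7, opt)
```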

Business Value

Enables the development of more capable and responsive robots for complex tasks in homes and factories, leading to increased automation, efficiency, and new service possibilities.
