
RobotArena ∞: Scalable Robot Benchmarking via Real-to-Sim Translation

Abstract

The pursuit of robot generalists - instructable agents capable of performing diverse tasks across diverse environments - demands rigorous and scalable evaluation. Yet real-world testing of robot policies remains fundamentally constrained: it is labor-intensive, slow, unsafe at scale, and difficult to reproduce. Existing simulation benchmarks are similarly limited, as they train and test policies within the same synthetic domains and cannot assess models trained from real-world demonstrations or alternative simulation environments. As policies expand in scope and complexity, these barriers only intensify, since defining "success" in robotics often hinges on nuanced human judgments of execution quality. In this paper, we introduce a new benchmarking framework that overcomes these challenges by shifting VLA evaluation into large-scale simulated environments augmented with online human feedback. Leveraging advances in vision-language models, 2D-to-3D generative modeling, and differentiable rendering, our approach automatically converts video demonstrations from widely used robot datasets into simulated counterparts. Within these digital twins, we assess VLA policies using both automated VLM-guided scoring and scalable human preference judgments collected from crowdworkers, transforming human involvement from tedious scene setup, resetting, and safety supervision into lightweight preference comparisons. To measure robustness, we systematically perturb simulated environments along multiple axes, such as textures and object placements, stress-testing policy generalization under controlled variation. The result is a continuously evolving, reproducible, and scalable benchmark for real-world trained robot manipulation policies, addressing a critical missing capability in today's robotics landscape.
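The robustness protocol described in the abstract perturbs each reconstructed scene along controlled axes such as textures and object placements. The paper's simulator API is not shown in this summary, so the sketch below is only a minimal illustration under assumed names: a hypothetical `load_scene` factory, `Scene`-like objects with texture and pose handles, an `evaluate_policy` callback, and arbitrary texture/jitter values.

```python
# Illustrative sketch only: sweep a reconstructed scene over perturbation axes
# (texture swaps, object placement jitter) and evaluate a policy in each variant.
# `load_scene`, `evaluate_policy`, the Scene methods, and the value grids below
# are hypothetical stand-ins; the actual RobotArena simulator API is not given here.
import itertools
import random

TEXTURES = ["wood", "marble", "metal"]   # assumed texture variants
PLACEMENT_JITTERS = [0.0, 0.02, 0.05]    # assumed XY jitter magnitudes in metres

def perturbation_sweep(load_scene, evaluate_policy, policy, task, seed=0):
    """Evaluate `policy` on every combination of perturbation settings."""
    rng = random.Random(seed)            # fixed seed keeps the grid reproducible
    results = []
    for texture, jitter in itertools.product(TEXTURES, PLACEMENT_JITTERS):
        scene = load_scene(task)         # fresh digital twin for each variant
        scene.set_table_texture(texture)
        for obj in scene.movable_objects():
            dx, dy = (rng.uniform(-jitter, jitter) for _ in range(2))
            obj.translate(dx, dy, 0.0)   # jitter object placement in the plane
        score = evaluate_policy(policy, scene)   # e.g. VLM-guided score in [0, 1]
        results.append({"texture": texture, "jitter": jitter, "score": score})
    return results
```

Sweeping the full product of axes with a fixed seed keeps the perturbation grid identical across policies, which is what makes comparisons under controlled variation fair and repeatable.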
Authors (9)
Yash Jangir
Yidi Zhang
Kashu Yamazaki
Chenyu Zhang
Kuan-Hsun Tu
Tsung-Wei Ke
+3 more
Submitted
October 27, 2025
arXiv Category
cs.RO

Key Contributions

Introduces RobotArena ∞, a benchmarking framework that overcomes the limitations of real-world testing and of existing simulation benchmarks by shifting vision-language-action (VLA) policy evaluation into large-scale simulated environments augmented with online human feedback. This enables scalable, safe, and reproducible evaluation of robot policies.
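The summary does not state how the crowdworkers' pairwise preference judgments are aggregated into a policy ranking. A common choice for arena-style leaderboards is a Bradley-Terry model; the sketch below is an illustrative assumption with hypothetical policy names and data format, not the paper's actual procedure.

```python
# Minimal sketch (assumption): aggregate pairwise preference judgments into
# per-policy scores with a Bradley-Terry model fit by gradient ascent.
# The (policy_a, policy_b, winner) format and the aggregation method are not
# specified by the paper summary; they are illustrative choices.
import math
from collections import defaultdict

def bradley_terry_scores(comparisons, n_iters=200, lr=0.1):
    """comparisons: list of (policy_a, policy_b, winner), winner in {policy_a, policy_b}."""
    policies = {p for a, b, _ in comparisons for p in (a, b)}
    theta = {p: 0.0 for p in policies}            # log-strength per policy
    for _ in range(n_iters):
        grad = defaultdict(float)
        for a, b, winner in comparisons:
            p_a = 1.0 / (1.0 + math.exp(theta[b] - theta[a]))  # P(a beats b)
            y = 1.0 if winner == a else 0.0
            grad[a] += y - p_a                    # gradient of the log-likelihood
            grad[b] += p_a - y
        for p in policies:
            theta[p] += lr * grad[p]
        mean = sum(theta.values()) / len(theta)   # zero-center to fix the gauge
        theta = {p: v - mean for p, v in theta.items()}
    return theta

# Hypothetical usage: three crowdworker votes over three example policies.
votes = [("policy_a", "policy_b", "policy_a"),
         ("policy_a", "policy_c", "policy_a"),
         ("policy_b", "policy_c", "policy_b")]
leaderboard = sorted(bradley_terry_scores(votes).items(), key=lambda kv: -kv[1])
```

Each score is a log-strength; zero-centering removes the model's arbitrary offset so scores remain comparable as new comparisons arrive, which suits a continuously evolving benchmark.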

Business Value

Provides a scalable, reliable platform for evaluating and developing advanced robotic agents, accelerating progress toward general-purpose robots across industries.