iFlyBot-VLA Technical Report

Abstract

We introduce iFlyBot-VLA, a large-scale Vision-Language-Action (VLA) model trained under a novel framework. The main contributions are as follows: (1) a latent action model thoroughly trained on large-scale human and robotic manipulation videos; (2) a dual-level action representation framework that jointly supervises both the Vision-Language Model (VLM) and the action expert during training; (3) a mixed training strategy that combines robot trajectory data with general QA and spatial QA datasets, effectively enhancing the 3D perception and reasoning capabilities of the VLM backbone. Specifically, the VLM is trained to predict two complementary forms of actions: latent actions, derived from our latent action model pretrained on cross-embodiment manipulation data, which capture implicit high-level intentions; and structured discrete action tokens, obtained through frequency-domain transformations of continuous control signals, which encode explicit low-level dynamics. This dual supervision aligns the representation spaces of language, vision, and action, enabling the VLM to contribute directly to action generation. Experimental results on the LIBERO Franka benchmark demonstrate the superiority of our framework, while real-world evaluations further show that iFlyBot-VLA achieves competitive success rates across diverse and challenging manipulation tasks. We also plan to open-source a portion of our self-constructed dataset to support future research in the community.
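The report describes its structured discrete action tokens only at a high level ("frequency-domain transformations of continuous control signals"). The sketch below shows one plausible reading, in the spirit of DCT-based action tokenizers such as FAST: compress each action chunk with a discrete cosine transform, keep the low-frequency coefficients, and uniformly quantize them into a token vocabulary. All names and parameters here (NUM_COEFFS, NUM_BINS, CLIP) are illustrative assumptions, not the paper's published configuration.

```python
# Hedged sketch: the report does not publish tokenizer code. This illustrates
# one plausible reading of "frequency-domain transformations of continuous
# control signals": DCT-compress each action chunk, quantize the leading
# coefficients, and emit them as discrete tokens.
import numpy as np
from scipy.fft import dct, idct

NUM_COEFFS = 8    # low-frequency DCT coefficients kept per action dimension (assumed)
NUM_BINS = 256    # quantization bins, i.e. discrete token vocabulary per slot (assumed)
CLIP = 3.0        # coefficient clipping range before uniform quantization (assumed)

def actions_to_tokens(chunk: np.ndarray) -> np.ndarray:
    """Encode a (T, D) chunk of normalized continuous actions as discrete tokens."""
    coeffs = dct(chunk, axis=0, norm="ortho")[:NUM_COEFFS]  # (NUM_COEFFS, D)
    clipped = np.clip(coeffs, -CLIP, CLIP)
    tokens = np.round((clipped + CLIP) / (2 * CLIP) * (NUM_BINS - 1))
    return tokens.astype(np.int64).reshape(-1)              # flat token sequence

def tokens_to_actions(tokens: np.ndarray, horizon: int, dims: int) -> np.ndarray:
    """Invert the quantization and inverse-DCT back to a continuous trajectory."""
    coeffs = tokens.reshape(NUM_COEFFS, dims) / (NUM_BINS - 1) * 2 * CLIP - CLIP
    padded = np.zeros((horizon, dims))
    padded[:NUM_COEFFS] = coeffs                            # zero-pad high frequencies
    return idct(padded, axis=0, norm="ortho")

# Round-trip example on a smooth 16-step, 7-DoF trajectory.
traj = np.sin(np.linspace(0, np.pi, 16))[:, None] * np.ones((1, 7))
recon = tokens_to_actions(actions_to_tokens(traj), horizon=16, dims=7)
print("max reconstruction error:", np.abs(traj - recon).max())
```

Truncating to the leading coefficients is what makes such tokens "structured": smooth control trajectories compress into a short low-frequency code, and quantization error appears only as small high-frequency ripple in the reconstruction.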

Key Contributions

Introduces iFlyBot-VLA, a large-scale Vision-Language-Action (VLA) model trained under a novel framework. Key contributions include a latent action model trained on large-scale human and robotic manipulation videos, a dual-level action representation that jointly supervises the VLM and the action expert, and a mixed training strategy combining robot trajectory data with general and spatial QA datasets to enhance the backbone's 3D perception and reasoning.
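To make the dual-level supervision concrete, the following is a minimal sketch of a combined objective in which the VLM's hidden states are decoded by two heads, one over latent-action tokens and one over the frequency-domain action tokens. The head layout, vocabulary sizes, and the weighting `lam` are assumptions for illustration, not details published in the report.

```python
# Hedged sketch of the dual-level supervision: the VLM is trained to predict
# both latent-action tokens (implicit intent) and frequency-domain action
# tokens (explicit dynamics). Head names and the loss weighting are assumed.
import torch
import torch.nn.functional as F

def dual_level_loss(hidden, latent_head, action_head,
                    latent_targets, action_targets, lam=0.5):
    """Combine cross-entropy over latent and discrete action token targets.

    hidden:          (B, T, H) VLM hidden states at the supervised positions
    latent_head:     linear layer mapping H -> latent-action vocabulary
    action_head:     linear layer mapping H -> discrete action-token vocabulary
    latent_targets:  (B, T) token ids from the pretrained latent action model
    action_targets:  (B, T) token ids from the frequency-domain tokenizer
    """
    latent_logits = latent_head(hidden)   # (B, T, V_latent)
    action_logits = action_head(hidden)   # (B, T, V_action)
    l_latent = F.cross_entropy(latent_logits.flatten(0, 1), latent_targets.flatten())
    l_action = F.cross_entropy(action_logits.flatten(0, 1), action_targets.flatten())
    return l_latent + lam * l_action
```

Because both targets are discrete token sequences, the same next-token machinery supervises them side by side, which is one way the VLM backbone can contribute directly to action generation rather than delegating everything to the action expert.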

Business Value

Enables more intelligent and adaptable robots for manipulation and interaction tasks, with potential applications in manufacturing, logistics, and domestic assistance.