📄 Abstract
We present LangToMo, a vision-language-action framework structured as a
dual-system architecture that uses pixel motion forecasts as intermediate
representations. Our high-level System 2, an image diffusion model, generates
text-conditioned pixel motion sequences from a single frame to guide robot
control. Pixel motion, a universal, interpretable, and motion-centric
representation, can be extracted from videos in a weakly supervised manner,
enabling diffusion model training on any video-caption data. Treating generated
pixel motion as learned universal representations, our low-level System 1
module translates these into robot actions via motion-to-action mapping
functions, which can be either hand-crafted or learned with minimal
supervision. System 2 operates as a high-level policy applied at sparse
temporal intervals, while System 1 acts as a low-level policy at dense temporal
intervals. This hierarchical decoupling enables flexible, scalable, and
generalizable robot control under both unsupervised and supervised settings,
bridging the gap between language, motion, and action. Check out
https://kahnchana.github.io/LangToMo for more details.
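
To make the hierarchical decoupling concrete, the following is a minimal sketch of the dual-system control loop described in the abstract, not the authors' implementation. The class names (`System2PixelMotionPlanner`, `System1MotionToAction`), the motion-field shapes, and the replanning interval are all assumptions chosen for illustration: a stand-in for the diffusion-based System 2 forecasts pixel-motion fields at sparse replanning steps, while a hand-crafted System 1 mapping converts each field into a dense low-level action.

```python
# Minimal sketch of the dual-system control loop (hypothetical names,
# not the LangToMo codebase). System 2 replans at sparse intervals;
# System 1 produces an action at every control step.
import numpy as np


class System2PixelMotionPlanner:
    """Stand-in for the text-conditioned image diffusion model: given a
    single frame and an instruction, forecast a short sequence of
    pixel-motion fields (H x W x 2 flow-like maps)."""

    def __init__(self, horizon: int = 4):
        self.horizon = horizon

    def forecast(self, frame: np.ndarray, instruction: str) -> np.ndarray:
        h, w = frame.shape[:2]
        # Placeholder: random motion fields in place of diffusion samples.
        return np.random.randn(self.horizon, h, w, 2).astype(np.float32)


class System1MotionToAction:
    """Hand-crafted motion-to-action mapping: reduce a pixel-motion field
    to a planar end-effector delta. In practice this module could instead
    be a small network learned with minimal supervision."""

    def act(self, motion_field: np.ndarray) -> np.ndarray:
        mean_flow = motion_field.reshape(-1, 2).mean(axis=0)
        return np.array([mean_flow[0], mean_flow[1], 0.0])  # (dx, dy, dz)


def control_loop(frame: np.ndarray, instruction: str,
                 steps: int = 12, replan_every: int = 4) -> np.ndarray:
    """Run System 2 at sparse intervals and System 1 at every step."""
    system2 = System2PixelMotionPlanner(horizon=replan_every)
    system1 = System1MotionToAction()
    plan, actions = None, []
    for t in range(steps):
        if t % replan_every == 0:          # sparse high-level replanning
            plan = system2.forecast(frame, instruction)
        motion = plan[t % replan_every]    # dense low-level control
        actions.append(system1.act(motion))
        # A real system would refresh `frame` from the robot's camera here.
    return np.stack(actions)


if __name__ == "__main__":
    dummy_frame = np.zeros((64, 64, 3), dtype=np.uint8)
    acts = control_loop(dummy_frame, "pick up the red block")
    print(acts.shape)  # (12, 3)
```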