
Dual-Stream Diffusion for World-Model Augmented Vision-Language-Action Model

📄 Abstract

Recently, augmenting Vision-Language-Action models (VLAs) with world modeling has shown promise in improving robotic policy learning. However, it remains challenging to jointly predict next-state observations and action sequences because of the inherent difference between the two modalities. To address this, we propose DUal-STream diffusion (DUST), a world-model augmented VLA framework that handles the modality conflict and enhances the performance of VLAs across diverse tasks. Specifically, we propose a multimodal diffusion transformer architecture that explicitly maintains separate modality streams while still enabling cross-modal knowledge sharing. In addition, we introduce independent noise perturbations for each modality and a decoupled flow-matching loss. This design enables the model to learn the joint distribution in a bidirectional manner while avoiding the need for a unified latent space. Based on the decoupling of modalities during training, we also introduce a joint sampling method that supports test-time scaling, where action and vision tokens evolve asynchronously at different rates. Through experiments on simulated benchmarks such as RoboCasa and GR-1, DUST achieves up to 6% gains over baseline methods, while our test-time scaling approach provides an additional 2-5% boost. On real-world tasks with the Franka Research 3, DUST improves success rates by 13%, confirming its effectiveness beyond simulation. Furthermore, pre-training on action-free videos from BridgeV2 yields significant transfer gains on RoboCasa, underscoring DUST's potential for large-scale VLA pretraining.
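The abstract's core training idea, independent noise perturbations per modality with a decoupled flow-matching loss, can be sketched as below. This is a minimal illustration, not the paper's implementation: the dual-stream `model` interface, the tensor shapes, and the straight-line (rectified-flow) interpolation path are all assumptions made for the sketch.

```python
import numpy as np

def decoupled_fm_loss(model, actions, obs, rng=None):
    """Hedged sketch of a decoupled flow-matching loss.

    Each modality stream (actions, observation latents) is perturbed with
    its OWN noise level t, and each stream regresses its own velocity
    target, so no shared latent space is required.

    `model(x_act, x_obs, t_act, t_obs)` is a hypothetical dual-stream
    network returning one velocity prediction per stream.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    B = actions.shape[0]
    # Independent noise levels (timesteps) for each modality stream
    t_act = rng.random((B, 1))
    t_obs = rng.random((B, 1))
    n_act = rng.standard_normal(actions.shape)
    n_obs = rng.standard_normal(obs.shape)
    # Straight-line interpolation between noise and data: x_t = (1-t)*noise + t*data
    x_act = (1 - t_act) * n_act + t_act * actions
    x_obs = (1 - t_obs) * n_obs + t_obs * obs
    v_act, v_obs = model(x_act, x_obs, t_act, t_obs)
    # Decoupled loss: each stream matches its own velocity target (data - noise)
    loss_act = np.mean((v_act - (actions - n_act)) ** 2)
    loss_obs = np.mean((v_obs - (obs - n_obs)) ** 2)
    return loss_act + loss_obs
```

Because the two timesteps are sampled independently, the model sees every combination of "clean actions, noisy observations" and vice versa, which is what lets it learn the joint distribution bidirectionally.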
Authors (5)
John Won
Kyungmin Lee
Huiwon Jang
Dongyoung Kim
Jinwoo Shin
Submitted
October 31, 2025
arXiv Category
cs.CV
arXiv PDF

Key Contributions

DUST is a novel world-model augmented VLA framework that uses a dual-stream diffusion architecture to jointly predict next-state observations and action sequences. It addresses modality conflicts by maintaining separate streams with independent noise perturbations and a decoupled flow-matching loss, enabling bidirectional learning without a unified latent space.
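The asynchronous joint sampling described above can be illustrated with a simple Euler-style integrator in which the action stream takes more (finer) denoising steps than the vision stream. This is a sketch under assumptions: the `model` interface, the Euler update, and the specific interleaving schedule are illustrative choices, not the paper's exact sampler.

```python
import numpy as np

def joint_sample(model, act_shape, obs_shape, act_steps=10, obs_steps=5, seed=0):
    """Hedged sketch of asynchronous joint sampling over two streams.

    Action tokens are updated every step; vision tokens are updated only
    every `act_steps // obs_steps` steps, so the two streams evolve at
    different rates while still conditioning on each other.
    """
    rng = np.random.default_rng(seed)
    x_act = rng.standard_normal(act_shape)
    x_obs = rng.standard_normal(obs_shape)
    t_act = t_obs = 0.0
    dt_act, dt_obs = 1.0 / act_steps, 1.0 / obs_steps
    stride = act_steps // obs_steps  # how often the vision stream advances
    for i in range(act_steps):
        v_act, v_obs = model(x_act, x_obs, t_act, t_obs)
        x_act = x_act + dt_act * v_act  # fine-grained action update
        t_act += dt_act
        if (i + 1) % stride == 0:       # coarser vision update
            x_obs = x_obs + dt_obs * v_obs
            t_obs += dt_obs
    return x_act, x_obs
```

Test-time scaling then amounts to raising `act_steps` (and optionally `obs_steps`) to trade extra compute for higher success rates, without retraining.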

Business Value

Enables more capable and adaptable robots that can better understand and interact with their environment, leading to advancements in automation and human-robot collaboration.