Abstract
Autonomous driving world models are expected to work effectively across three
core dimensions: state, action, and reward. Existing models, however, typically
suffer from limited state modalities, short video sequences, imprecise action
control, and a lack of reward awareness. In this paper, we
introduce OmniNWM, an omniscient panoramic navigation world model that
addresses all three dimensions within a unified framework. For state, OmniNWM
jointly generates panoramic videos of RGB, semantics, metric depth, and 3D
occupancy. A flexible forcing strategy enables high-quality long-horizon
auto-regressive generation. For action, we introduce a normalized panoramic
Plücker ray-map representation that encodes input trajectories into pixel-level
signals, enabling highly precise and generalizable control over panoramic video
generation. For reward, rather than learning reward functions with external
image-based models, we leverage the generated 3D occupancy to directly define
rule-based dense rewards for driving compliance and safety.
Extensive experiments demonstrate that OmniNWM achieves state-of-the-art
performance in video generation, control accuracy, and long-horizon stability,
while providing a reliable closed-loop evaluation framework through
occupancy-grounded rewards. The project page is available at
https://github.com/Arlo0o/OmniNWM.
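
The abstract does not spell out how the normalized panoramic Plücker ray map is built, but the standard Plücker parameterization of a camera ray is well defined: each pixel's ray is encoded by its unit direction d and its moment m = c × d, where c is the camera center. Below is a minimal sketch for a single pinhole view; the function name and arguments are illustrative, not the paper's API.

```python
import numpy as np

def plucker_ray_map(K, R, t, H, W):
    """Per-pixel Plücker ray map for one pinhole view.

    Illustrative sketch only: K is the 3x3 intrinsic matrix and
    (R, t) the camera-to-world rotation/translation. Returns an
    (H, W, 6) map of (direction, moment) pairs with unit directions.
    """
    # Pixel centers in homogeneous image coordinates.
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)           # (H, W, 3)

    # Back-project to world-space directions and normalize to unit length.
    dirs = pix @ np.linalg.inv(K).T @ R.T                      # (H, W, 3)
    dirs = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Plücker moment: camera center crossed with the ray direction.
    moments = np.cross(np.broadcast_to(t, dirs.shape), dirs)

    return np.concatenate([dirs, moments], axis=-1)            # (H, W, 6)
```

Because the moment m = c × d does not depend on which point along the ray is chosen, the six-channel map gives the video model a dense, pixel-aligned encoding of the commanded trajectory; normalizing directions (and presumably the translations, per the paper's "normalized" qualifier) keeps the signal comparable across scenes and camera rigs.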
Authors (11)
Bohan Li
Zhuang Ma
Dalong Du
Baorui Peng
Zhujin Liang
Zhenqiang Liu
+5 more
Submitted
October 21, 2025
Key Contributions
OmniNWM introduces a unified framework for autonomous driving world models that addresses limitations in state, action, and reward dimensions. It achieves this by jointly generating panoramic videos of multiple modalities (RGB, semantics, depth, occupancy), employing a novel Plücker ray-map representation for precise action control, and leveraging generated 3D occupancy for reward awareness.
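
The page does not detail how the rule-based occupancy reward is computed, but one plausible minimal sketch follows, assuming a semantic occupancy grid with hypothetical class ids; `occupancy_reward`, `drivable_id`, `free_id`, and the weights are illustrative, not from the paper.

```python
import numpy as np

def occupancy_reward(occ, footprint_idx, drivable_id=0, free_id=255):
    """Rule-based dense reward from a predicted semantic occupancy grid.

    Illustrative sketch only: `occ` is an (X, Y, Z) array of semantic
    class ids generated by the world model, and `footprint_idx` indexes
    the voxels swept by the ego vehicle at the evaluated waypoint.
    Class ids and weights are hypothetical, not the paper's values.
    """
    cells = occ[footprint_idx]

    # Safety rule: any footprint voxel that is neither free space nor
    # drivable surface counts as a collision.
    collision = np.any((cells != free_id) & (cells != drivable_id))

    # Compliance rule: fraction of footprint voxels on drivable surface.
    on_road = np.mean(cells == drivable_id)

    # Dense scalar reward: heavily penalize collisions, reward compliance.
    return -10.0 * float(collision) + float(on_road)
```

Grounding rewards directly in the generated occupancy sidesteps training a separate image-based reward model and keeps each reward term interpretable as an explicit driving rule.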
Business Value
Enables more robust and comprehensive understanding for autonomous driving systems, potentially leading to safer and more reliable navigation in complex environments.