Abstract
We present DriveGen3D, a novel framework for generating high-quality and
highly controllable dynamic 3D driving scenes that addresses critical
limitations in existing methodologies. Current approaches to driving scene
synthesis either suffer from prohibitive computational demands for extended
temporal generation, focus exclusively on prolonged video synthesis without 3D
representation, or restrict themselves to static single-scene reconstruction.
Our work bridges this methodological gap by integrating accelerated long-term
video generation with large-scale dynamic scene reconstruction through
multimodal conditional control. DriveGen3D introduces a unified pipeline
consisting of two specialized components: FastDrive-DiT, an efficient video
diffusion transformer for high-resolution, temporally coherent video synthesis
under text and Bird's-Eye-View (BEV) layout guidance; and FastRecon3D, a
feed-forward reconstruction module that rapidly builds 3D Gaussian
representations across time, ensuring spatial-temporal consistency. Together,
these components enable real-time generation of extended driving videos (up to
$424\times800$ at 12 FPS) and corresponding dynamic 3D scenes, achieving SSIM
of 0.811 and PSNR of 22.84 on novel view synthesis, all while maintaining
parameter efficiency.
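The two-stage pipeline described in the abstract (video synthesis followed by feed-forward 3D Gaussian reconstruction) can be pictured as a simple chained flow. The sketch below is a minimal illustration of that flow in Python; the function names, tensor shapes, Gaussian parameterization, and frame budget are assumptions made for illustration, not the released FastDrive-DiT or FastRecon3D interfaces.

```python
import numpy as np

# Hypothetical stand-ins for the two DriveGen3D stages; the real
# FastDrive-DiT / FastRecon3D APIs are not specified in this summary.

def fast_drive_dit(text_prompt, bev_layouts, num_frames=16, height=424, width=800):
    """Stage 1 (sketch): diffusion-based video synthesis conditioned on a
    text prompt and per-frame BEV layout maps. Returns an RGB video tensor
    of shape (num_frames, height, width, 3)."""
    assert len(bev_layouts) == num_frames, "one BEV layout per generated frame"
    # Placeholder output: a real model would run iterative denoising here.
    return np.random.rand(num_frames, height, width, 3).astype(np.float32)

def fast_recon3d(video):
    """Stage 2 (sketch): feed-forward lifting of the generated frames into
    per-frame sets of 3D Gaussian primitives (position, scale, rotation,
    opacity, color), keeping temporal correspondence across frames."""
    num_frames = video.shape[0]
    gaussians_per_frame = 4096  # illustrative budget, not taken from the paper
    return [
        {
            "means": np.random.rand(gaussians_per_frame, 3),
            "scales": np.random.rand(gaussians_per_frame, 3),
            "rotations": np.random.rand(gaussians_per_frame, 4),
            "opacities": np.random.rand(gaussians_per_frame, 1),
            "colors": np.random.rand(gaussians_per_frame, 3),
        }
        for _ in range(num_frames)
    ]

if __name__ == "__main__":
    frames = 16
    bev = [np.zeros((200, 200), dtype=np.uint8) for _ in range(frames)]  # dummy layouts
    video = fast_drive_dit("a rainy urban intersection at dusk", bev, num_frames=frames)
    scene = fast_recon3d(video)
    print(video.shape, len(scene), scene[0]["means"].shape)
```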
Authors (16)
Weijie Wang
Jiagang Zhu
Zeyu Zhang
Xiaofeng Wang
Zheng Zhu
Guosheng Zhao
+10 more
Submitted
October 17, 2025
Key Contributions
DriveGen3D presents a novel framework for generating high-quality, controllable dynamic 3D driving scenes by integrating accelerated video diffusion (FastDrive-DiT) with efficient feed-forward 3D reconstruction (FastRecon3D). It addresses the prohibitive computational cost and static-scene focus of prior methods, enabling feed-forward generation of extended temporal sequences under multimodal conditional control (text and BEV layouts); a minimal sketch of such BEV conditioning follows below.
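Since the contribution emphasizes BEV layout as a control signal, the following sketch illustrates one way per-frame BEV conditioning could be rasterized from ground-plane bounding boxes into a semantic grid. The grid size, metric extent, box format, and class encoding are all assumptions for illustration and are not details given in the paper.

```python
import numpy as np

def rasterize_bev_layout(boxes, grid=200, extent_m=50.0):
    """Rasterize ground-plane boxes (x, y, width, length in meters, class id)
    into a single-channel BEV semantic grid centered on the ego vehicle."""
    layout = np.zeros((grid, grid), dtype=np.uint8)
    scale = grid / (2.0 * extent_m)               # meters -> pixels
    for x, y, w, l, cls_id in boxes:
        cx, cy = (x + extent_m) * scale, (y + extent_m) * scale
        half_w, half_l = w * scale / 2.0, l * scale / 2.0
        x0, x1 = int(max(cx - half_w, 0)), int(min(cx + half_w, grid - 1))
        y0, y1 = int(max(cy - half_l, 0)), int(min(cy + half_l, grid - 1))
        layout[y0:y1 + 1, x0:x1 + 1] = cls_id     # axis-aligned fill; ignores yaw
    return layout

# Example: two vehicles ahead of the ego car, encoded as class id 1.
layout = rasterize_bev_layout([(5.0, 10.0, 2.0, 4.5, 1), (-3.0, 20.0, 2.0, 4.5, 1)])
print(layout.shape, int(layout.sum()))
```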
Business Value
Significantly accelerates the creation of realistic and dynamic 3D environments for training and testing autonomous driving systems, reducing simulation costs and development time.