📄 Abstract
We introduce Diff4Splat, a feed-forward method that synthesizes controllable
and explicit 4D scenes from a single image. Our approach unifies the generative
priors of video diffusion models with geometry and motion constraints learned
from large-scale 4D datasets. Given a single input image, a camera trajectory,
and an optional text prompt, Diff4Splat directly predicts a deformable 3D
Gaussian field that encodes appearance, geometry, and motion, all in a single
forward pass, without test-time optimization or post-hoc refinement. At the
core of our framework lies a video latent transformer, which augments video
diffusion models to jointly capture spatio-temporal dependencies and predict
time-varying 3D Gaussian primitives. Training is guided by objectives on
appearance fidelity, geometric accuracy, and motion consistency, enabling
Diff4Splat to synthesize high-quality 4D scenes in 30 seconds. We demonstrate
the effectiveness of Diff4Splat across video generation, novel view synthesis,
and geometry extraction, where it matches or surpasses optimization-based
methods for dynamic scene synthesis while being significantly more efficient.
Authors (11)
Panwang Pan
Chenguo Lin
Jingjing Zhao
Chenxin Li
Yuchen Lin
Haopeng Li
+5 more
Submitted
November 1, 2025
Key Contributions
Diff4Splat presents a feed-forward method for synthesizing controllable and explicit 4D scenes from a single image by unifying video diffusion models with 4D data constraints. It directly predicts a deformable 3D Gaussian field encoding appearance, geometry, and motion, eliminating test-time optimization. The core innovation is a video latent transformer that captures spatio-temporal dependencies for predicting time-varying 3D Gaussian primitives, enabling high-quality 4D scene generation.
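To make the "deformable 3D Gaussian field" concrete, the sketch below shows one plausible way to represent a time-varying 3D Gaussian primitive: a canonical Gaussian (center, scale, orientation, opacity, color) plus per-frame position offsets predicted in the same forward pass. The field names and the simple additive deformation model are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DeformableGaussian:
    """Hypothetical time-varying 3D Gaussian primitive (illustrative only)."""
    mean: np.ndarray         # (3,) canonical center
    scale: np.ndarray        # (3,) per-axis extent
    rotation: np.ndarray     # (4,) unit quaternion orientation
    opacity: float           # scalar alpha
    color: np.ndarray        # (3,) RGB (real models often use SH coefficients)
    delta_means: np.ndarray  # (T, 3) per-frame position offsets encoding motion

    def position_at(self, t: int) -> np.ndarray:
        # Assumed deformation model: canonical center plus a per-frame offset.
        return self.mean + self.delta_means[t]

# A Gaussian that drifts one unit along x over five frames.
g = DeformableGaussian(
    mean=np.zeros(3),
    scale=np.full(3, 0.1),
    rotation=np.array([1.0, 0.0, 0.0, 0.0]),
    opacity=0.9,
    color=np.array([0.5, 0.5, 0.5]),
    delta_means=np.linspace(0.0, 1.0, 5)[:, None] * np.array([1.0, 0.0, 0.0]),
)
print(g.position_at(4))  # center at the final frame
```

A full scene would hold many such primitives, rendered per frame by splatting each Gaussian at its deformed position; here the per-frame offsets stand in for whatever motion representation the network actually predicts.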
Business Value
Enables rapid creation of realistic and dynamic 3D environments from single images, accelerating content creation for VR/AR, gaming, and virtual production.