📄 Abstract
While dynamic novel view synthesis from 2D videos has seen progress,
achieving efficient reconstruction and rendering of dynamic scenes remains a
challenging task. In this paper, we introduce Disentangled 4D Gaussian
Splatting (Disentangled4DGS), a novel representation and rendering pipeline
that achieves real-time performance without compromising visual fidelity.
Disentangled4DGS decouples the temporal and spatial components of 4D Gaussians,
avoiding the slicing-first step and the four-dimensional matrix computations
required by prior methods. By projecting temporal and spatial deformations into dynamic 2D
Gaussians and deferring temporal processing, we minimize redundant computations
of 4DGS. Our approach also features a gradient-guided flow loss and temporal
splitting strategy to reduce artifacts. Experiments demonstrate a significant
improvement in rendering speed and quality, achieving 343 FPS when rendering
1352×1014 images on a single RTX 3090, while reducing storage
requirements by at least 4.5%. Our approach sets a new benchmark for dynamic
novel view synthesis, outperforming existing methods on both multi-view and
monocular dynamic scene datasets.
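
The decoupling idea can be illustrated with a minimal sketch. The Python below is not the authors' implementation; the function and parameter names (`eval_decoupled_gaussian`, `velocity`, `sigma_t`) are placeholders, and it only shows the general principle of conditioning a 4D Gaussian on time without slicing a full 4×4 covariance: the spatial mean is shifted by a motion term and the opacity is attenuated by a 1D temporal Gaussian.

```python
import numpy as np

def eval_decoupled_gaussian(mu_xyz, velocity, mu_t, sigma_t, opacity, t):
    """Conceptual sketch (not the paper's code): evaluate a 4D Gaussian at
    time t while keeping its spatial and temporal parts separate.

    mu_xyz   : (3,) spatial mean at the reference time mu_t
    velocity : (3,) hypothetical linear motion term coupling time to space
    mu_t     : scalar temporal mean
    sigma_t  : scalar temporal standard deviation
    opacity  : base opacity of the Gaussian
    t        : query timestamp
    """
    # Spatial deformation: shift the mean along the motion direction
    # instead of slicing a four-dimensional covariance matrix.
    mu_at_t = mu_xyz + velocity * (t - mu_t)

    # Temporal weighting: a 1D Gaussian in time attenuates opacity, so
    # Gaussians far from t contribute almost nothing and can be culled
    # before any per-pixel work is done.
    w_t = np.exp(-0.5 * ((t - mu_t) / sigma_t) ** 2)

    return mu_at_t, opacity * w_t


# Example: a Gaussian centred at t = 0.5 queried at t = 0.6
mu, alpha = eval_decoupled_gaussian(
    mu_xyz=np.array([0.0, 1.0, 2.0]),
    velocity=np.array([0.1, 0.0, -0.2]),
    mu_t=0.5, sigma_t=0.2, opacity=0.9, t=0.6,
)
print(mu, alpha)
```

In this toy form, deferring the temporal term means the spatial projection to a 2D Gaussian can be computed once and the cheap time-dependent factors applied afterwards, which is the intuition behind the paper's reduction of redundant 4DGS computation.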
Authors (5)
Hao Feng
Hao Sun
Wei Xie
Zhi Zuo
Zhengzhe Liu
Key Contributions
Disentangled4DGS introduces a novel representation and rendering pipeline that achieves real-time performance (343 FPS) for dynamic novel view synthesis by decoupling temporal and spatial components of 4D Gaussians. This avoids redundant computations and minimizes artifacts, significantly improving speed and quality over prior 4DGS methods.
Business Value
Enables real-time interactive experiences with dynamic 3D environments for applications like VR/AR, virtual production, and robotics simulation, reducing rendering bottlenecks.