D$^2$GS: Depth-and-Density Guided Gaussian Splatting for Stable and Accurate Sparse-View Reconstruction

Abstract

Recent advances in 3D Gaussian Splatting (3DGS) enable real-time, high-fidelity novel view synthesis (NVS) with explicit 3D representations, but performance degrades and becomes unstable when input views are sparse. In this work, we identify two key failure modes under sparse-view conditions: overfitting in regions with excessive Gaussian density near the camera, and underfitting in distant areas with insufficient Gaussian coverage. To address these challenges, we propose D$^2$GS, a unified framework comprising two key components: a Depth-and-Density Guided Dropout strategy that suppresses overfitting by adaptively masking redundant Gaussians based on density and depth, and a Distance-Aware Fidelity Enhancement module that improves reconstruction quality in under-fitted far-field areas through targeted supervision. Moreover, we introduce a new evaluation metric that quantifies the stability of the learned Gaussian distributions, providing insight into the robustness of sparse-view 3DGS. Extensive experiments on multiple datasets demonstrate that our method significantly improves both visual quality and robustness under sparse-view conditions. The project page can be found at: https://insta360-research-team.github.io/DDGS-website/.
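To make the dropout idea concrete, here is a minimal sketch of how a depth-and-density guided keep-mask could be computed. The abstract does not spell out the exact formulation, so the density estimate (k-nearest-neighbor distances), the depth proxy, and every name below (`depth_density_dropout_mask`, `max_drop`, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: all names and formulas are assumptions made
# for intuition; they are not taken from the D^2GS paper or codebase.
import torch

def depth_density_dropout_mask(xyz: torch.Tensor,
                               cam_pos: torch.Tensor,
                               k: int = 16,
                               max_drop: float = 0.5) -> torch.Tensor:
    """Return a boolean keep-mask over N Gaussians with centers xyz (N, 3).

    Intuition from the abstract: Gaussians that are both dense and close
    to the camera are the most redundant, so they get the highest drop rate.
    """
    # Depth proxy: Euclidean distance from the camera center, shape (N,)
    depth = torch.linalg.norm(xyz - cam_pos, dim=1)

    # Density proxy: inverse mean distance to the k nearest neighbors (N,)
    pairwise = torch.cdist(xyz, xyz)                       # (N, N)
    knn = pairwise.topk(k + 1, largest=False).values[:, 1:]  # drop self
    density = 1.0 / (knn.mean(dim=1) + 1e-8)

    # Normalize both cues to [0, 1]
    def norm01(x: torch.Tensor) -> torch.Tensor:
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    near = 1.0 - norm01(depth)    # 1 = near the camera
    dense = norm01(density)       # 1 = crowded region

    # Drop probability grows with nearness * density, capped at max_drop
    p_drop = max_drop * near * dense
    return torch.rand_like(p_drop) > p_drop  # True = keep this Gaussian
```

Under these assumptions, Gaussians that are both close to the camera and in crowded regions are dropped most often during training, matching the abstract's description of suppressing near-camera overfitting while leaving sparse far-field Gaussians untouched.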

Key Contributions

Proposes D$^2$GS, a unified framework for stable and accurate 3D Gaussian Splatting under sparse-view conditions. Introduces a Depth-and-Density Guided Dropout strategy that suppresses overfitting from redundant near-camera Gaussians and a Distance-Aware Fidelity Enhancement module that improves reconstruction in under-fitted distant areas, addressing the two key failure modes of existing 3DGS methods. Also contributes a new metric for quantifying the stability of learned Gaussian distributions; a hedged sketch of the far-field supervision idea follows below.
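For the far-field side, one plausible reading of "targeted supervision" is a distance-weighted photometric loss. The sketch below assumes a rendered per-pixel depth map is available and upweights distant pixels so that under-fitted regions contribute more to the gradient; all names and the weighting scheme are hypothetical, not the published module.

```python
# Hypothetical sketch of distance-aware supervision: the weighting scheme
# below is an assumption, not the paper's published loss.
import torch

def distance_aware_loss(pred: torch.Tensor,
                        gt: torch.Tensor,
                        depth: torch.Tensor,
                        gamma: float = 1.0) -> torch.Tensor:
    """pred/gt: (3, H, W) rendered and ground-truth images;
    depth: (H, W) rendered depth map for the same view."""
    # Map depth to weights in [1, 1 + gamma]: farther pixels weigh more
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    w = 1.0 + gamma * d                                  # (H, W)
    per_pixel = (pred - gt).abs().mean(dim=0)            # L1 over channels
    return (w * per_pixel).sum() / w.sum()               # weighted mean
```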

Business Value

Enables higher quality and more robust 3D reconstruction from limited input data, valuable for applications like virtual try-on, architectural visualization, and content creation for AR/VR.