Abstract
Generating realistic 3D objects from single-view images requires natural
appearance, 3D consistency, and the ability to capture multiple plausible
interpretations of unseen regions. Existing approaches often rely on
fine-tuning pretrained 2D diffusion models or on directly generating 3D
information through fast network inference or 3D Gaussian Splatting, but their
results generally suffer from poor multiview consistency and lack geometric
detail. To tackle these issues, we present a novel method that reconstructs
detailed 3D objects from a single image by seamlessly integrating geometry and
perception information, without requiring additional model training.
Specifically, we incorporate geometry and perception priors to initialize the
Gaussian branches and guide their parameter optimization. The geometry prior
captures the rough 3D shape, while the perception prior leverages a pretrained
2D diffusion model to enhance multiview information. Subsequently, we
introduce a stable Score Distillation Sampling (SDS) scheme for fine-grained prior
distillation to ensure effective knowledge transfer. The model is further
enhanced by a reprojection-based strategy that enforces depth consistency.
Experimental results show that our method outperforms existing approaches on
novel view synthesis and 3D reconstruction, demonstrating robust and consistent
3D object generation.
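
The reprojection-based depth-consistency idea mentioned above can be illustrated with a small sketch: depth rendered from one Gaussian view is lifted to 3D, reprojected into a neighboring view, and compared against that view's rendered depth. The camera conventions, function names, and the L1 penalty below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed details, not the authors' code) of a reprojection-based
# depth-consistency penalty between two rendered views.
import numpy as np

def unproject(depth, K, c2w):
    """Lift a depth map (H, W) to world-space points using intrinsics K (3x3)
    and a camera-to-world pose c2w (4x4)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, HW) pixel coords
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)               # camera-space points
    cam_h = np.concatenate([cam, np.ones((1, cam.shape[1]))], axis=0)   # homogeneous coords
    return (c2w @ cam_h)[:3].T                                          # (HW, 3) world points

def reprojection_depth_loss(depth_a, depth_b, K, c2w_a, c2w_b):
    """Reproject view A's depth into view B and penalize disagreement with
    view B's rendered depth (mean L1 over valid pixels)."""
    H, W = depth_a.shape
    pts_w = unproject(depth_a, K, c2w_a)                                 # surface points seen from A
    w2c_b = np.linalg.inv(c2w_b)
    pts_b = w2c_b[:3, :3] @ pts_w.T + w2c_b[:3, 3:4]                     # points in B's camera frame
    z_b = pts_b[2]                                                       # depth those points should have in B
    uv = (K @ pts_b)[:2] / np.clip(z_b, 1e-6, None)                      # projected pixel coords in B
    u, v = np.round(uv[0]).astype(int), np.round(uv[1]).astype(int)
    valid = (z_b > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    if not valid.any():
        return 0.0
    # Compare the reprojected depth with B's own rendered depth at those pixels.
    return float(np.mean(np.abs(depth_b[v[valid], u[valid]] - z_b[valid])))
```

In an optimization loop, a term of this kind would be evaluated between pairs of rendered views and added to the overall objective so that the Gaussian geometry remains consistent across viewpoints; the exact weighting and view-pairing strategy used by the paper are not specified here.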