📄 Abstract
Recent advances in 3D Gaussian Splatting (3DGS) have enabled high-quality,
real-time novel-view synthesis from multi-view images. However, most existing
methods assume the object is captured in a single, static pose, resulting in
incomplete reconstructions that miss occluded or self-occluded regions. We
introduce PFGS, a pose-aware 3DGS framework that addresses the practical
challenge of reconstructing complete objects from multi-pose image captures.
Given images of an object in one main pose and several auxiliary poses, PFGS
iteratively fuses each auxiliary set into a unified 3DGS representation of the
main pose. Our pose-aware fusion strategy combines global and local
registration to merge views effectively and refine the 3DGS model. While recent
advances in 3D foundation models have improved registration robustness and
efficiency, they remain limited by high memory demands and suboptimal accuracy.
PFGS overcomes these challenges by integrating foundation models more selectively into
the registration process: it leverages background features for per-pose camera
pose estimation and reserves foundation models for cross-pose registration. This
design captures the best of both approaches while resolving background
inconsistency issues. Experimental results demonstrate that PFGS consistently
outperforms strong baselines in both qualitative and quantitative evaluations,
producing more complete reconstructions and higher-fidelity 3DGS models.
Authors (5)
Ting-Yu Yen
Yu-Sheng Chiu
Shih-Hsuan Hung
Peter Wonka
Hung-Kuo Chu
Submitted
October 17, 2025
Key Contributions
PFGS (Pose-Fused 3D Gaussian Splatting) enables complete multi-pose object reconstruction by iteratively fusing auxiliary poses into a unified 3DGS representation. It uses a pose-aware fusion strategy with global and local registration to reconstruct objects from multiple viewpoints, including occluded regions.
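The global registration step described above amounts to estimating a rigid transform that aligns geometry captured in an auxiliary pose with the main pose. As a minimal, self-contained illustration of that idea (not the actual PFGS pipeline, which operates on 3DGS models and uses foundation-model features), the sketch below aligns two toy point sets with the classic Kabsch algorithm; all names and data here are illustrative:

```python
import numpy as np

def kabsch_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy data: the same object points seen in a "main" pose and an
# "auxiliary" pose related by a known rotation and translation.
rng = np.random.default_rng(0)
main_pts = rng.normal(size=(50, 3))
theta = np.pi / 4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
aux_pts = main_pts @ R_true.T + np.array([0.3, -0.2, 0.5])

# Recover the transform and map auxiliary-pose points back to the main pose.
R, t = kabsch_align(aux_pts, main_pts)
aligned = aux_pts @ R.T + t
err = np.abs(aligned - main_pts).max()
```

In PFGS such a coarse global alignment would be followed by local registration and refinement of the fused 3DGS model; this example covers only the rigid-alignment concept.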
Business Value
Enables the creation of highly realistic and complete 3D models of objects, valuable for virtual try-ons, product visualization, and immersive AR/VR experiences.