
ReSplat: Learning Recurrent Gaussian Splats

Abstract

While feed-forward Gaussian splatting models provide computational efficiency and effectively handle sparse input settings, their performance is fundamentally limited by the reliance on a single forward pass during inference. We propose ReSplat, a feed-forward recurrent Gaussian splatting model that iteratively refines 3D Gaussians without explicitly computing gradients. Our key insight is that the Gaussian splatting rendering error serves as a rich feedback signal, guiding the recurrent network to learn effective Gaussian updates. This feedback signal naturally adapts to unseen data distributions at test time, enabling robust generalization. To initialize the recurrent process, we introduce a compact reconstruction model that operates in a $16 \times$ subsampled space, producing $16 \times$ fewer Gaussians than previous per-pixel Gaussian models. This substantially reduces computational overhead and allows for efficient Gaussian updates. Extensive experiments across varying numbers of input views (2, 8, 16), resolutions ($256 \times 256$ to $540 \times 960$), and datasets (DL3DV and RealEstate10K) demonstrate that our method achieves state-of-the-art performance while significantly reducing the number of Gaussians and improving rendering speed. Our project page is at https://haofeixu.github.io/resplat/.
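The core idea — repeatedly updating Gaussian parameters from the rendering error alone, with no explicit gradient through the renderer — can be illustrated with a toy sketch. Everything here is hypothetical: `toy_render`, `update_net`, and `recurrent_refine` are stand-in names, the "renderer" is a fixed linear map, and the learned update network is replaced by a fixed correction; the actual ReSplat model uses a trained recurrent network and a real splatting rasterizer.

```python
import numpy as np

# Toy sketch of the recurrent-refinement loop (assumed structure, not the
# authors' implementation): parameters are updated from the rendering error,
# which acts as the feedback signal, without differentiating the renderer.

P = np.array([[1.0, 0.2],
              [0.1, 0.9]])  # stand-in for the (fixed) rendering operator


def toy_render(gaussians):
    """Placeholder for the Gaussian splatting rasterizer: a linear map."""
    return P @ gaussians


def update_net(error):
    """Placeholder for the learned recurrent update network.

    In ReSplat this is a trained network conditioned on the rendering
    error; here it is a fixed linear correction for illustration."""
    return 0.5 * P.T @ error


def recurrent_refine(gaussians, target, steps=8):
    """Iteratively refine parameters using rendering error as feedback."""
    for _ in range(steps):
        rendered = toy_render(gaussians)
        error = target - rendered          # feedback signal
        gaussians = gaussians + update_net(error)
    return gaussians


if __name__ == "__main__":
    init = np.zeros(2)                     # compact initialization stand-in
    target = np.array([1.0, -1.0])         # "ground-truth" rendering
    refined = recurrent_refine(init, target)
    before = np.linalg.norm(target - toy_render(init))
    after = np.linalg.norm(target - toy_render(refined))
    print(before, after)                   # error shrinks across iterations
```

The point of the sketch is only the control flow: render, measure error, feed the error to an update module, repeat — each iteration reduces the rendering residual without ever backpropagating through the renderer at test time.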

Key Contributions

Introduces ReSplat, a feed-forward recurrent Gaussian splatting model that iteratively refines 3D Gaussians using rendering error as a feedback signal, without explicit gradient computation. It uses a compact initialization model for efficiency and demonstrates robust generalization.

Business Value

Enables more efficient and robust generation of 3D assets and scenes from limited input, benefiting AR/VR content creation, gaming, and simulation.