📄 Abstract
When viewing a 3D Gaussian Splatting (3DGS) model from camera positions
significantly outside the training data distribution, substantial visual noise
commonly occurs. These artifacts result from the lack of training data in these
extrapolated regions, leading to uncertain density, color, and geometry
predictions from the model.
To address this issue, we propose a novel real-time render-aware filtering
method. Our approach leverages sensitivity scores derived from intermediate
gradients, explicitly targeting instabilities caused by anisotropic
orientations rather than isotropic variance. This filtering method directly
addresses the core issue of generative uncertainty, allowing 3D reconstruction
systems to maintain high visual fidelity even when users freely navigate
outside the original training viewpoints.
Experimental evaluation demonstrates that our method substantially improves
visual quality, realism, and consistency compared to existing Neural Radiance
Field (NeRF)-based approaches such as BayesRays. Critically, our filter
seamlessly integrates into existing 3DGS rendering pipelines in real time,
unlike methods that require extensive post-hoc retraining or fine-tuning.
Code and results at https://damian-bowness.github.io/EV3DGS
Authors (2)
Damian Bowness
Charalambos Poullis
Submitted
October 22, 2025
Key Contributions
This paper proposes a real-time render-aware filtering method for 3D Gaussian Splatting (3DGS) to address visual noise from out-of-distribution camera poses. By analyzing gradient sensitivity tied to anisotropic orientations, the filter suppresses unstable Gaussians, significantly improving visual quality, realism, and consistency compared to existing NeRF-based approaches such as BayesRays.
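To make the idea concrete, below is a minimal, hypothetical sketch of gradient-based orientation-sensitivity scoring, under the assumptions stated in the comments. The toy `render` function, the per-Gaussian layout, and the median threshold are all illustrative stand-ins, not the authors' implementation; a real 3DGS rasterizer would supply the differentiable render step.

```python
import torch

# Hedged sketch (not the authors' code): score each Gaussian by how strongly
# the rendered image responds to perturbations of its *orientation* parameters,
# then keep only low-sensitivity (stable) Gaussians. `render` is a toy
# differentiable stand-in for a 3DGS rasterizer.

def render(means2d, rotations, opacities, H=32, W=32):
    """Toy splatting: each Gaussian contributes an anisotropic blob whose
    footprint depends on its quaternion, mimicking orientation-driven
    instability. Assumed interface, for illustration only."""
    ys = torch.linspace(-1.0, 1.0, H).view(H, 1)
    xs = torch.linspace(-1.0, 1.0, W).view(1, W)
    img = torch.zeros(H, W)
    for mu, q, a in zip(means2d, rotations, opacities):
        # Orientation-dependent anisotropic footprint (toy model).
        sx = 0.05 + 0.2 * q[0].abs()
        sy = 0.05 + 0.2 * q[1].abs()
        img = img + a * torch.exp(-(((xs - mu[0]) / sx) ** 2
                                    + ((ys - mu[1]) / sy) ** 2))
    return img

N = 8
means2d = torch.rand(N, 2) * 2.0 - 1.0        # 2D positions in [-1, 1]
rotations = torch.randn(N, 4).requires_grad_(True)  # quaternions (unnormalized)
opacities = torch.rand(N)

image = render(means2d, rotations, opacities)
image.sum().backward()                  # one backward pass for all Gaussians
scores = rotations.grad.norm(dim=-1)    # per-Gaussian orientation sensitivity
keep = scores < scores.median()         # hypothetical cutoff; the paper's
print(keep)                             # threshold rule is not specified here
```

A filter of this shape runs alongside the rasterizer rather than modifying the trained model, which is consistent with the paper's claim of real-time integration without post-hoc retraining.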
Business Value
Enables more robust and visually appealing 3D reconstructions and virtual environments, especially for applications requiring free navigation or user-generated content. This enhances realism in VR/AR and simplifies the creation of high-fidelity 3D assets.