Abstract: Robots benefit from high-fidelity reconstructions of their environment, which
should be geometrically accurate and photorealistic to support downstream
tasks. While this can be achieved by building distance fields from range
sensors and radiance fields from cameras, it is challenging to realize scalable
incremental mapping of both fields consistently, simultaneously, and with high
quality. In this paper, we propose a novel map representation that unifies
a continuous signed distance field and a Gaussian splatting radiance field
within an elastic and compact point-based implicit neural map. By enforcing
geometric consistency between these fields, we achieve mutual improvements by
exploiting both modalities. We present PINGS, a novel LiDAR-visual SLAM system
built on the proposed map representation, and evaluate it on several
challenging large-scale datasets. Experimental results demonstrate that PINGS
can incrementally build globally consistent distance and radiance fields
encoded with a compact set of neural points. Compared to state-of-the-art
methods, PINGS achieves superior photometric and geometric rendering at novel
views by constraining the radiance field with the distance field. Furthermore,
by utilizing dense photometric cues and multi-view consistency from the
radiance field, PINGS produces more accurate distance fields, leading to
improved odometry estimation and mesh reconstruction. We also provide an
open-source implementation of PINGS at: https://github.com/PRBonn/PINGS.
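
To make the unified representation described above concrete, below is a minimal, self-contained sketch of a point-based neural map whose shared per-point latent features are decoded into both a signed distance value and Gaussian splat parameters, coupled by a simple geometric consistency term. This is an illustrative assumption of how such a map could be structured, not the PINGS implementation: all names (`NeuralPointMap`, `geometric_consistency_loss`), decoder sizes, the interpolation scheme, and the specific consistency formulation are hypothetical.

```python
# Hypothetical sketch of a unified neural point map: shared per-point
# features decoded into an SDF and into Gaussian splat parameters.
# Names, dimensions, and the consistency term are illustrative
# assumptions, not the PINGS API.
import torch
import torch.nn as nn


class NeuralPointMap(nn.Module):
    """Point-based implicit map: each neural point stores a position and a
    latent feature shared by the distance and radiance decoders."""

    def __init__(self, num_points: int, feat_dim: int = 32):
        super().__init__()
        self.positions = nn.Parameter(torch.randn(num_points, 3))
        self.features = nn.Parameter(torch.zeros(num_points, feat_dim))
        # SDF decoder: interpolated feature -> signed distance value.
        self.sdf_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        # Gaussian decoder: feature -> splat parameters
        # (3 mean offset + 3 log-scale + 4 rotation quaternion
        #  + 3 color + 1 opacity = 14 values).
        self.gauss_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 14))

    def interpolate(self, queries: torch.Tensor, k: int = 8) -> torch.Tensor:
        """Inverse-distance-weighted feature interpolation from the k
        nearest neural points to each query (brute force, for clarity)."""
        d2 = torch.cdist(queries, self.positions) ** 2        # (Q, N)
        knn_d2, idx = d2.topk(k, dim=1, largest=False)        # (Q, k)
        w = 1.0 / (knn_d2 + 1e-8)
        w = w / w.sum(dim=1, keepdim=True)
        return (w.unsqueeze(-1) * self.features[idx]).sum(dim=1)

    def query_sdf(self, queries: torch.Tensor) -> torch.Tensor:
        """Signed distance at arbitrary query positions, shape (Q,)."""
        return self.sdf_head(self.interpolate(queries)).squeeze(-1)

    def spawn_gaussians(self) -> dict:
        """Decode one Gaussian per neural point from its own feature."""
        p = self.gauss_head(self.features)
        return {
            "mean": self.positions + 0.05 * torch.tanh(p[:, 0:3]),
            "scale": torch.exp(p[:, 3:6]),
            "rot": nn.functional.normalize(p[:, 6:10], dim=-1),
            "color": torch.sigmoid(p[:, 10:13]),
            "opacity": torch.sigmoid(p[:, 13]),
        }


def geometric_consistency_loss(map_: NeuralPointMap) -> torch.Tensor:
    """One plausible coupling of the two fields (an assumption): pull
    Gaussian centers onto the zero level set of the distance field."""
    means = map_.spawn_gaussians()["mean"]
    return map_.query_sdf(means).abs().mean()


if __name__ == "__main__":
    m = NeuralPointMap(num_points=1000)
    loss = geometric_consistency_loss(m)
    loss.backward()  # gradients flow to both decoders and the points
    print(f"consistency loss: {loss.item():.4f}")
```

The point of the design, under these assumptions, is that both decoders read the same latent field: photometric supervision on the Gaussians and range supervision on the SDF then update the same neural point features, which is one way the mutual improvement claimed in the abstract could arise.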