
Targetless LiDAR-Camera Calibration with Neural Gaussian Splatting

📄 Abstract

Accurate LiDAR-camera calibration is crucial for multi-sensor systems. However, traditional methods often rely on physical targets, which are impractical for real-world deployment. Moreover, even carefully calibrated extrinsics can degrade over time due to sensor drift or external disturbances, necessitating periodic recalibration. To address these challenges, we present a Targetless LiDAR-Camera Calibration method (TLC-Calib) that jointly optimizes sensor poses with a neural Gaussian-based scene representation. Reliable LiDAR points are frozen as anchor Gaussians to preserve global structure, while auxiliary Gaussians prevent local overfitting under noisy initialization. Our fully differentiable pipeline with photometric and geometric regularization achieves robust and generalizable calibration, consistently outperforming existing targetless methods on KITTI-360, Waymo, and FAST-LIVO2, and surpassing even the provided calibrations in rendering quality.
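To make the core idea concrete, here is a minimal, heavily simplified sketch of targetless photometric calibration: LiDAR points with known colors are projected into the camera image under a candidate extrinsic, and the extrinsic is scored by the photometric residual. This is an illustrative toy, not the paper's method: it uses a translation-only extrinsic (no rotation), treats each LiDAR point as a single pixel rather than a rendered Gaussian, and omits the geometric regularization and anchor/auxiliary Gaussian machinery entirely. All function names (`project`, `photometric_loss`) are hypothetical.

```python
import numpy as np

def project(points_lidar, t, K):
    """Project LiDAR points into the image with a pinhole camera model.

    Simplification: the LiDAR-to-camera extrinsic is a pure translation t
    (the actual method optimizes full SE(3) poses jointly with the scene).
    """
    p_cam = points_lidar + t            # LiDAR frame -> camera frame (rotation omitted)
    uv = (K @ p_cam.T).T                # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]       # perspective divide

def photometric_loss(t, points, colors, image, K):
    """Sum of squared color residuals at the projected pixel locations.

    A lower loss means the candidate extrinsic t aligns the LiDAR points'
    colors better with the observed image, which is the signal a
    photometric calibration objective descends on.
    """
    uv = np.round(project(points, t, K)).astype(int)
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    diff = image[uv[valid, 1], uv[valid, 0]] - colors[valid]
    return float(np.sum(diff ** 2))
```

In the paper's fully differentiable pipeline, the analogous loss is backpropagated through a Gaussian-splatting renderer to update the sensor poses and scene jointly; here one could only do gradient-free search over `t`, since nearest-pixel lookup is not differentiable.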

Key Contributions

Presents TLC-Calib, a targetless LiDAR-camera calibration method that jointly optimizes sensor poses and a neural Gaussian-based scene representation. It achieves robust and generalizable calibration without physical targets.

Business Value

Enables more reliable and cost-effective deployment of multi-sensor systems (e.g., autonomous vehicles) by simplifying and improving the calibration process, reducing downtime and increasing safety.