LiDAR-VGGT: Cross-Modal Coarse-to-Fine Fusion for Globally Consistent and Metric-Scale Dense Mapping

Abstract

Reconstructing large-scale colored point clouds is an important task in robotics, supporting perception, navigation, and scene understanding. Despite advances in LiDAR-inertial-visual odometry (LIVO), its performance remains highly sensitive to extrinsic calibration. Meanwhile, 3D vision foundation models such as VGGT suffer from limited scalability in large environments and inherently lack metric scale. To overcome these limitations, we propose LiDAR-VGGT, a novel framework that tightly couples LiDAR-inertial odometry with the state-of-the-art VGGT model through a two-stage coarse-to-fine fusion pipeline: first, a pre-fusion module with robust initialization refinement efficiently estimates VGGT poses and point clouds with coarse metric scale within each session; then, a post-fusion module enhances the cross-modal 3D similarity transformation, using bounding-box-based regularization to reduce the scale distortions caused by the inconsistent fields of view (FOVs) of the LiDAR and camera sensors. Extensive experiments across multiple datasets demonstrate that LiDAR-VGGT achieves dense, globally consistent colored point clouds and outperforms both VGGT-based methods and LIVO baselines. Our novel colored point cloud evaluation toolkit will be released as open source.
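
The coarse metric scale in the pre-fusion stage can be pictured as a similarity alignment between the up-to-scale VGGT outputs and the metric LiDAR-inertial odometry. The sketch below is a minimal illustration of that idea using the standard Umeyama alignment over paired camera positions from both pipelines; the function name and the choice of trajectory (rather than point-cloud) correspondences are assumptions for illustration, not the paper's actual initialization procedure.

```python
import numpy as np

def umeyama_sim3(src: np.ndarray, dst: np.ndarray):
    """Least-squares similarity transform (Umeyama, 1991).

    Finds scale s, rotation R, translation t such that
    dst ~= s * R @ src + t for corresponding (N, 3) point sets,
    e.g. up-to-scale VGGT camera positions (src) aligned to
    metric LiDAR-inertial positions (dst).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)            # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                          # guard against reflections
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)     # variance of source points
    s = np.trace(np.diag(D) @ S) / var_src      # recovered metric scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check: a known scale of 3.2 plus a translation is recovered exactly.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
s, R, t = umeyama_sim3(pts, 3.2 * pts + np.array([1.0, -2.0, 0.5]))
print(f"scale = {s:.3f}")  # scale = 3.200
```

A closed-form alignment like this only yields a coarse, session-level scale; per the abstract, the paper then refines the cross-modal transformation in a separate post-fusion stage.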
Authors (6)
Lijie Wang
Lianjie Guo
Ziyi Xu
Qianhao Wang
Fei Gao
Xieyuanli Chen
Submitted
November 3, 2025
arXiv Category
cs.RO
arXiv PDF

Key Contributions

LiDAR-VGGT tightly couples LiDAR-inertial odometry with the VGGT foundation model for globally consistent, metric-scale dense mapping. It addresses the limitations of existing methods with a two-stage coarse-to-fine fusion pipeline: a pre-fusion module refines VGGT poses and point clouds to a coarse metric scale, and a post-fusion module refines the cross-modal 3D similarity transformation with bounding-box-based regularization to reduce scale distortions, enabling more accurate large-scale 3D reconstruction. A sketch of one plausible reading of that regularization follows.
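
Because the LiDAR and camera cover different portions of the scene, a scale fitted from correspondences alone can drift; the extents of the co-visible region offer an independent cue. The snippet below is a hypothetical sketch of this idea, not the paper's formulation: `bbox_scale_regularizer`, the blending weight, and the use of axis-aligned boxes are all assumptions.

```python
import numpy as np

def bbox_scale_regularizer(lidar_pts, vggt_pts, s_est, weight=0.5):
    """Blend a fitted Sim(3) scale with a bounding-box scale cue.

    Hypothetical sketch: the ratio of axis-aligned bounding-box
    diagonals of the (assumed co-visible) LiDAR and VGGT clouds gives
    an independent scale estimate, damping distortions caused by the
    FOV mismatch between the two sensors.
    """
    diag = lambda p: np.linalg.norm(p.max(axis=0) - p.min(axis=0))
    s_bbox = diag(lidar_pts) / diag(vggt_pts)   # scale implied by extents
    return (1.0 - weight) * s_est + weight * s_bbox  # convex blend
```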

Business Value

Improves the accuracy and reliability of 3D mapping for autonomous systems and AR/VR applications, reducing the need for manual calibration and enabling more robust navigation and scene understanding.