Abstract
Efficient and accurate camera pose estimation forms the foundational
requirement for dense reconstruction in autonomous navigation, robotic
perception, and virtual simulation systems. This paper addresses the challenge
via cuSfM, a CUDA-accelerated offline Structure-from-Motion system that
leverages GPU parallelization to efficiently employ computationally intensive
yet highly accurate feature extractors, generating comprehensive and
non-redundant data associations for precise camera pose estimation and globally
consistent mapping. The system supports pose optimization, mapping, prior-map
localization, and extrinsic refinement. It is designed for offline processing,
where computational resources can be fully utilized to maximize accuracy.
Experimental results demonstrate that cuSfM achieves significantly improved
accuracy and processing speed compared to the widely used COLMAP method across
various testing scenarios, while maintaining the high precision and global
consistency essential for offline SfM applications. The system is released as
an open-source Python wrapper implementation, PyCuSfM, available at
https://github.com/nvidia-isaac/pyCuSFM, to facilitate research and
applications in computer vision and robotics.
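The abstract emphasizes "comprehensive and non-redundant data associations" between images. As a minimal, hypothetical illustration (not taken from the cuSfM codebase), the sketch below shows one common way SfM pipelines avoid redundant matching work: enumerating each unordered image pair exactly once, so a pair is never matched as both (i, j) and (j, i). The function name `image_pairs` is an assumption for illustration only.

```python
from itertools import combinations

def image_pairs(image_ids):
    """Enumerate each unordered pair of images exactly once.

    Sorting first gives a canonical order, so (i, j) and (j, i)
    can never both appear in the output.
    """
    return list(combinations(sorted(image_ids), 2))

# Three images yield exactly 3 unique pairs, regardless of input order.
pairs = image_pairs(["img2", "img0", "img1"])
```

Real systems such as cuSfM go further (e.g., pruning pairs that are unlikely to overlap), but the non-redundancy property sketched here is the same.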
Authors (8)
Jingrui Yu
Jun Liu
Kefei Ren
Joydeep Biswas
Rurui Ye
Keqiang Wu
+2 more
Submitted
October 17, 2025
Key Contributions
cuSfM is a CUDA-accelerated offline Structure-from-Motion system that significantly improves accuracy and processing speed by leveraging GPU parallelization for computationally intensive feature extractors and data associations. It provides precise camera pose estimation and globally consistent mapping, outperforming existing methods like COLMAP.
Business Value
Enables more efficient and accurate 3D reconstruction for applications like autonomous driving, robotics, and AR/VR, reducing processing time and improving system performance.