Abstract
Accurate 3D reconstruction in visually degraded underwater environments
remains a formidable challenge. Single-modality approaches are insufficient:
vision-based methods fail due to poor visibility and geometric constraints,
while sonar is crippled by inherent elevation ambiguity and low resolution.
Consequently, prior fusion techniques rely on heuristics and flawed geometric
assumptions, leading to significant artifacts and an inability to model complex
scenes. In this paper, we introduce SonarSweep, a novel, end-to-end deep
learning framework that overcomes these limitations by adapting the principled
plane sweep algorithm for cross-modal fusion between sonar and visual data.
Extensive experiments in both high-fidelity simulation and real-world
environments demonstrate that SonarSweep consistently generates dense and
accurate depth maps, significantly outperforming state-of-the-art methods
across challenging conditions, particularly in high turbidity. To foster
further research, we will publicly release our code and a novel dataset
featuring synchronized stereo-camera and sonar data, the first of its kind.
Authors (5)
Lingpeng Chen
Jiakun Tang
Apple Pui-Yi Chui
Ziyang Hong
Junfeng Wu
Submitted
November 1, 2025
Key Contributions
Introduces SonarSweep, a novel end-to-end deep learning framework that fuses sonar and visual data using an adapted plane sweep algorithm for robust 3D reconstruction in visually degraded underwater environments. It overcomes limitations of single-modality approaches and prior fusion techniques by avoiding heuristics and flawed geometric assumptions.
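The page does not reproduce the paper's method details, but the underlying idea of a plane sweep adapted to cross-modal fusion can be illustrated with a small, classical sketch: for each candidate depth plane, warp the source view into the reference view via the plane-induced homography, score photometric agreement, and add a term that rewards hypotheses consistent with sonar range returns. Everything below is a hedged illustration under stated assumptions; the function name, the sonar consistency term, the per-pixel sonar range map, and all parameters are hypothetical and are not taken from SonarSweep itself, which replaces such hand-crafted costs with a learned, end-to-end network.

```python
# Illustrative sketch only: a classical plane-sweep cost volume augmented with a
# sonar range-consistency term. All names and parameters are assumptions for
# illustration, not the SonarSweep pipeline.
import numpy as np

def plane_sweep_cost_volume(ref_img, src_img, K, R, t, depths, sonar_ranges,
                            sonar_sigma=0.1, sonar_weight=0.5):
    """Build a (D, H, W) cost volume over fronto-parallel depth hypotheses.

    ref_img, src_img : (H, W) grayscale images from the reference/source cameras.
    K                : (3, 3) camera intrinsics (assumed shared by both views).
    R, t             : rotation and translation taking reference-frame points
                       into the source camera frame.
    depths           : iterable of candidate depths defining the sweep planes.
    sonar_ranges     : (H, W) per-pixel range evidence projected from sonar,
                       NaN where no return is available (hypothetical
                       preprocessing step, for illustration only).
    """
    H, W = ref_img.shape
    K_inv = np.linalg.inv(K)
    # Reference pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])

    cost = np.empty((len(depths), H, W))
    for i, d in enumerate(depths):
        # Homography induced by the fronto-parallel plane Z = d in the reference
        # frame: H_d = K (R + t n^T / d) K^{-1}, with plane normal n = (0, 0, 1).
        n = np.array([[0.0, 0.0, 1.0]])
        H_d = K @ (R + (t.reshape(3, 1) @ n) / d) @ K_inv
        warped = H_d @ pix
        x = (warped[0] / warped[2]).reshape(H, W)
        y = (warped[1] / warped[2]).reshape(H, W)

        # Nearest-neighbour sampling of the source image (bilinear in practice).
        xi = np.clip(np.round(x).astype(int), 0, W - 1)
        yi = np.clip(np.round(y).astype(int), 0, H - 1)
        photo_cost = np.abs(ref_img - src_img[yi, xi])

        # Sonar term: penalise hypotheses whose Euclidean range along the pixel
        # ray disagrees with the sonar return; pixels without a return fall back
        # to the photometric cost alone.
        ray_len = np.linalg.norm(K_inv @ pix, axis=0).reshape(H, W) * d
        sonar_cost = 1.0 - np.exp(-((ray_len - sonar_ranges) ** 2)
                                  / (2 * sonar_sigma ** 2))
        sonar_cost = np.where(np.isnan(sonar_ranges), 0.0, sonar_cost)

        cost[i] = photo_cost + sonar_weight * sonar_cost
    return cost
```

Taking an argmin over the depth axis of this volume would yield a crude depth map; in a learned framework such as SonarSweep, one would expect deep image features, learned matching costs, and network-based cost-volume regularization in place of the absolute-difference and Gaussian terms used here.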
Business Value
Enables more reliable and accurate 3D mapping of underwater environments, crucial for autonomous underwater vehicles (AUVs), marine research, infrastructure inspection, and resource exploration.