Abstract
Registering large-scale, heterogeneous SAR and optical images, particularly
across platforms, is highly challenging due to significant geometric,
radiometric, and temporal differences that most existing methods struggle to
address. To overcome these challenges, we propose Grid-Reg, a grid-based
multimodal registration framework comprising a domain-robust descriptor
extraction network, the Hybrid Siamese Correlation Metric Learning Network
(HSCMLNet), and a grid-based solver (Grid-Solver) for transformation parameter
estimation. In heterogeneous imagery with large modality gaps and geometric
differences, obtaining accurate correspondences is inherently difficult. To
robustly measure similarity between gridded patches, HSCMLNet integrates a
hybrid Siamese module with a correlation metric learning module (CMLModule)
based on equiangular unit basis vectors (EUBVs), together with a manifold
consistency loss to promote modality-invariant, discriminative feature
learning. The Grid-Solver estimates transformation parameters by minimizing a
global grid matching loss through a progressive dual-loop search strategy to
reliably find patch correspondences across entire images. Furthermore, we
curate a challenging benchmark dataset for SAR-to-optical registration using
real-world UAV MiniSAR data and Google Earth optical imagery. Extensive
experiments demonstrate that our proposed approach achieves superior
performance over state-of-the-art methods.
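The progressive dual-loop search used by the Grid-Solver can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the paper's implementation: it assumes a 2-D similarity transform and matches gridded patch centers by raw point distance, whereas the actual Grid-Solver minimizes a global grid matching loss built from HSCMLNet patch similarities. The outer loop coarsely scans scale and rotation (with translation solved in closed form), and the inner loop refines around the best coarse candidate:

```python
import numpy as np

def apply_sim(params, pts):
    """Apply a 2-D similarity transform (scale, rotation, translation)."""
    s, th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return s * pts @ R.T + np.array([tx, ty])

def grid_matching_loss(params, src, dst):
    """Mean Euclidean distance between transformed source patch centers
    and their matched destination patch centers (stand-in for the
    paper's similarity-based global grid matching loss)."""
    return np.mean(np.linalg.norm(apply_sim(params, src) - dst, axis=1))

def dual_loop_solver(src, dst):
    """Coarse outer loop over (scale, rotation), then a finer inner loop
    around the best coarse candidate -- a toy 'progressive dual-loop'
    search. Translation is recovered in closed form for each candidate."""
    def best_over(scales, thetas):
        best, best_loss = None, np.inf
        for s in scales:
            for th in thetas:
                R = np.array([[np.cos(th), -np.sin(th)],
                              [np.sin(th),  np.cos(th)]])
                # closed-form translation: align centroids after scale+rotation
                t = (dst - s * src @ R.T).mean(axis=0)
                p = (s, th, t[0], t[1])
                loss = grid_matching_loss(p, src, dst)
                if loss < best_loss:
                    best, best_loss = p, loss
        return best, best_loss

    # outer (coarse) loop over scale and rotation
    p, _ = best_over(np.linspace(0.8, 1.2, 9),
                     np.linspace(-np.pi / 6, np.pi / 6, 13))
    # inner (fine) loop around the coarse optimum
    p, loss = best_over(np.linspace(p[0] - 0.05, p[0] + 0.05, 21),
                        np.linspace(p[1] - 0.09, p[1] + 0.09, 37))
    return p, loss

# toy example: recover a known transform from a 5x5 grid of patch centers
src = np.stack(np.meshgrid(np.arange(0, 50, 10),
                           np.arange(0, 50, 10)), -1).reshape(-1, 2).astype(float)
true_params = (1.05, 0.1, 3.0, -2.0)
dst = apply_sim(true_params, src)
params, loss = dual_loop_solver(src, dst)
```

In this toy setting the recovered scale, rotation, and translation land close to the ground-truth transform; the real method's robustness comes from the learned HSCMLNet similarities that replace the point distances assumed here.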