Learning to Reduce Scale Differences for Large-Scale Invariant Image Matching | IEEE Journals & Magazine | IEEE Xplore


Abstract:

Most image matching methods perform poorly when encountering large scale changes in images. To solve this problem, we propose a Scale-Difference-Aware Image Matching method (SDAIM) that reduces image scale differences before local feature extraction by resizing both images of an image pair according to an estimated scale ratio. To accurately estimate the scale ratio for the proposed SDAIM, we propose a Covisibility-Attention-Reinforced Matching module (CVARM) and, based on it, design a novel neural network termed Scale-Net. The proposed CVARM places more emphasis on covisible areas within the image pair and suppresses distraction from areas visible in only one image. Quantitative and qualitative experiments confirm that the proposed Scale-Net achieves higher scale ratio estimation accuracy and much better generalization ability than all existing scale ratio estimation methods. Further experiments on image matching and relative pose estimation tasks demonstrate that our SDAIM and Scale-Net greatly boost the performance of representative local features and state-of-the-art local feature matching methods.
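The abstract's core idea, resizing both images of a pair according to an estimated scale ratio so that their scales roughly match before feature extraction, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the symmetric split of the ratio across both images (each image resized by the square root of the ratio) is an assumption, and `resize_factors` / `resized_shapes` are hypothetical helper names.

```python
import math

def resize_factors(scale_ratio):
    """Split an estimated scale ratio (scale of image A / scale of image B)
    symmetrically across both images, so both are resized toward a common
    intermediate scale. The symmetric policy is an assumption for illustration."""
    f_a = 1.0 / math.sqrt(scale_ratio)  # shrink the larger-scale image
    f_b = math.sqrt(scale_ratio)        # enlarge the smaller-scale image
    return f_a, f_b

def resized_shapes(shape_a, shape_b, scale_ratio):
    """Return the (width, height) each image would be resized to before
    local feature extraction, given the estimated scale ratio."""
    f_a, f_b = resize_factors(scale_ratio)
    new_a = tuple(round(d * f_a) for d in shape_a)
    new_b = tuple(round(d * f_b) for d in shape_b)
    return new_a, new_b

# Example: image A depicts the scene at 4x the scale of image B.
# Both are brought to a common intermediate resolution.
print(resized_shapes((800, 600), (200, 150), 4.0))  # ((400, 300), (400, 300))
```

With the scale difference removed, any off-the-shelf local feature extractor (the abstract mentions representative local features and matching methods) can be run on the resized pair.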
Page(s): 1335 - 1348
Date of Publication: 28 September 2022
