Article: Learning scale awareness in keypoint extraction and description

Title: Learning scale awareness in keypoint extraction and description
Authors: Shen, Xuelun; Wang, Cheng; Li, Xin; Peng, Yifan; He, Zijian; Wen, Chenglu; Cheng, Ming
Keywords: 3D reconstruction; Image matching; Keypoint description; Keypoint detection; Structure from motion
Issue Date: 2022
Citation: Pattern Recognition, 2022, v. 121, article no. 108221
Abstract: To recover relative camera motion accurately and robustly, establishing a set of point-to-point correspondences in pixel space is an essential yet challenging task in computer vision. Although the multi-scale design philosophy has been used with significant success in computer vision tasks such as object detection and semantic segmentation, it has not been fully exploited in learning-based image matching. In this work, we explore a scale-awareness learning approach to finding pixel-level correspondences, based on the intuition that keypoints need to be extracted and described at an appropriate scale. With that insight, we propose a novel scale-aware network and then develop a new fusion scheme that derives high-consistency response maps and high-precision descriptions. We also revise the Second Order Similarity Regularization (SOSR) to make it more effective for the end-to-end image matching network, which leads to significant improvement in local feature descriptions. Experimental results on multiple datasets demonstrate that our approach performs better than state-of-the-art methods under multiple criteria.
Persistent Identifier: http://hdl.handle.net/10722/315203
ISSN: 0031-3203
2021 Impact Factor: 8.518
2020 SCImago Journal Rankings: 1.492
ISI Accession Number ID: WOS:000697551500007
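
The abstract above mentions a revised Second Order Similarity Regularization (SOSR); the record does not describe the authors' revision. For reference only, the following minimal PyTorch sketch illustrates the standard second-order similarity term that SOSR builds on (originally proposed for descriptor learning in SOSNet). The function name, tensor shapes, and use of torch.cdist are illustrative assumptions, not the authors' implementation.

```python
import torch

def second_order_similarity_reg(desc_a: torch.Tensor, desc_b: torch.Tensor) -> torch.Tensor:
    """Standard second-order similarity regularization (SOSNet-style sketch).

    desc_a, desc_b: (N, D) L2-normalized descriptors of N matched keypoints
    from two images. The term penalizes differences between the intra-image
    pairwise-distance patterns of the two descriptor sets; the paper's revised
    SOSR is not reproduced here.
    """
    dist_a = torch.cdist(desc_a, desc_a)  # (N, N) descriptor distances within image A
    dist_b = torch.cdist(desc_b, desc_b)  # (N, N) descriptor distances within image B
    # Per-anchor discrepancy between the two distance patterns, averaged over anchors.
    return torch.sqrt(((dist_a - dist_b) ** 2).sum(dim=1) + 1e-8).mean()

# Illustrative usage with random, L2-normalized descriptors (hypothetical sizes).
desc_a = torch.nn.functional.normalize(torch.randn(128, 256), dim=1)
desc_b = torch.nn.functional.normalize(torch.randn(128, 256), dim=1)
print(second_order_similarity_reg(desc_a, desc_b))
```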

 

DC Field: Value
dc.contributor.author: Shen, Xuelun
dc.contributor.author: Wang, Cheng
dc.contributor.author: Li, Xin
dc.contributor.author: Peng, Yifan
dc.contributor.author: He, Zijian
dc.contributor.author: Wen, Chenglu
dc.contributor.author: Cheng, Ming
dc.date.accessioned: 2022-08-05T10:18:02Z
dc.date.available: 2022-08-05T10:18:02Z
dc.date.issued: 2022
dc.identifier.citation: Pattern Recognition, 2022, v. 121, article no. 108221
dc.identifier.issn: 0031-3203
dc.identifier.uri: http://hdl.handle.net/10722/315203
dc.description.abstract: To recover relative camera motion accurately and robustly, establishing a set of point-to-point correspondences in pixel space is an essential yet challenging task in computer vision. Although the multi-scale design philosophy has been used with significant success in computer vision tasks such as object detection and semantic segmentation, it has not been fully exploited in learning-based image matching. In this work, we explore a scale-awareness learning approach to finding pixel-level correspondences, based on the intuition that keypoints need to be extracted and described at an appropriate scale. With that insight, we propose a novel scale-aware network and then develop a new fusion scheme that derives high-consistency response maps and high-precision descriptions. We also revise the Second Order Similarity Regularization (SOSR) to make it more effective for the end-to-end image matching network, which leads to significant improvement in local feature descriptions. Experimental results on multiple datasets demonstrate that our approach performs better than state-of-the-art methods under multiple criteria.
dc.language: eng
dc.relation.ispartof: Pattern Recognition
dc.subject: 3D reconstruction
dc.subject: Image matching
dc.subject: Keypoint description
dc.subject: Keypoint detection
dc.subject: Structure from motion
dc.title: Learning scale awareness in keypoint extraction and description
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.patcog.2021.108221
dc.identifier.scopus: eid_2-s2.0-85112626949
dc.identifier.volume: 121
dc.identifier.spage: article no. 108221
dc.identifier.epage: article no. 108221
dc.identifier.isi: WOS:000697551500007
