Conference Paper: SCNet: Learning Semantic Correspondence

Title: SCNet: Learning Semantic Correspondence
Authors: Han, K; Rezende, RS; Ham, B; Wong, KKY; Cho, M; Schmid, C; Ponce, J
Issue Date: 2017
Publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000149
Citation: Proceedings of 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22-29 October 2017, p. 1849-1858
Abstract: This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features.
Description: Recognition: paper no. 37
Persistent Identifier: http://hdl.handle.net/10722/246607
ISSN: 1550-5499
2020 SCImago Journal Rankings: 4.133
ISI Accession Number ID: WOS:000425498401096
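
The abstract above sketches the approach at a high level: region proposals serve as matching primitives, and the training loss rewards geometrically consistent matches. The toy Python/NumPy snippet below is a purely illustrative sketch of that idea, not the authors' architecture or their actual loss; the function names and the translation-based consistency measure are assumptions made here for illustration only.

# Illustrative sketch only (NOT the SCNet implementation or its loss):
# combine an appearance-similarity matrix over region proposals with a
# simple geometric-consistency weight, loosely following the abstract's
# description. All names below are hypothetical.
import numpy as np

def box_centers(boxes):
    """Centers of region proposals given as (x1, y1, x2, y2) rows."""
    boxes = np.asarray(boxes, dtype=float)
    return np.stack([(boxes[:, 0] + boxes[:, 2]) / 2.0,
                     (boxes[:, 1] + boxes[:, 3]) / 2.0], axis=1)

def geometric_consistency(boxes_a, boxes_b, sigma=30.0):
    """Toy consistency weight: how well each candidate pair (i, j) agrees
    with the median translation implied by all candidate pairs."""
    ca, cb = box_centers(boxes_a), box_centers(boxes_b)
    offsets = cb[None, :, :] - ca[:, None, :]          # shape (nA, nB, 2)
    median_offset = np.median(offsets.reshape(-1, 2), axis=0)
    dist = np.linalg.norm(offsets - median_offset, axis=2)
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))   # (nA, nB), in [0, 1]

def matching_scores(appearance_sim, boxes_a, boxes_b):
    """Appearance similarity (e.g. from CNN features) reweighted by geometry."""
    return appearance_sim * geometric_consistency(boxes_a, boxes_b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    boxes_a = rng.uniform(0, 200, size=(5, 4))
    boxes_b = boxes_a + 20.0                 # same layout, shifted by (20, 20)
    sim = rng.uniform(0, 1, size=(5, 5))     # stand-in for learned similarity
    print(matching_scores(sim, boxes_a, boxes_b).round(3))

In this toy example the second set of boxes is the first set shifted by a constant offset, so the geometrically consistent (diagonal) pairs keep the highest scores while inconsistent pairs are suppressed; the actual SCNet architecture and training loss are defined in the paper itself.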

 

DC Field | Value | Language
dc.contributor.author | Han, K | -
dc.contributor.author | Rezende, RS | -
dc.contributor.author | Ham, B | -
dc.contributor.author | Wong, KKY | -
dc.contributor.author | Cho, M | -
dc.contributor.author | Schmid, C | -
dc.contributor.author | Ponce, J | -
dc.date.accessioned | 2017-09-18T02:31:28Z | -
dc.date.available | 2017-09-18T02:31:28Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | Proceedings of 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22-29 October 2017, p. 1849-1858 | -
dc.identifier.issn | 1550-5499 | -
dc.identifier.uri | http://hdl.handle.net/10722/246607 | -
dc.description | Recognition: paper no. 37 | -
dc.description.abstract | This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features. | -
dc.language | eng | -
dc.publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000149 | -
dc.relation.ispartof | IEEE International Conference on Computer Vision (ICCV) Proceedings | -
dc.rights | ©2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | -
dc.title | SCNet: Learning Semantic Correspondence | -
dc.type | Conference_Paper | -
dc.identifier.email | Wong, KKY: kykwong@cs.hku.hk | -
dc.identifier.authority | Wong, KKY=rp01393 | -
dc.description.nature | postprint | -
dc.identifier.doi | 10.1109/ICCV.2017.203 | -
dc.identifier.scopus | eid_2-s2.0-85041907470 | -
dc.identifier.hkuros | 276764 | -
dc.identifier.spage | 1849 | -
dc.identifier.epage | 1858 | -
dc.identifier.isi | WOS:000425498401096 | -
dc.publisher.place | United States | -
dc.identifier.issnl | 1550-5499 | -
