Conference Paper: CAMP: Cross-modal adaptive message passing for text-image retrieval

Title: CAMP: Cross-modal adaptive message passing for text-image retrieval
Authors: Wang, Zihao; Liu, Xihui; Li, Hongsheng; Sheng, Lu; Yan, Junjie; Wang, Xiaogang; Shao, Jing
Issue Date: 2019
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2019, v. 2019-October, p. 5763-5772
Abstract: Text-image cross-modal retrieval is a challenging task in the field of language and vision. Most previous approaches independently embed images and sentences into a joint embedding space and compare their similarities. However, previous approaches rarely explore the interactions between images and sentences before calculating similarities in the joint space. Intuitively, when matching between images and sentences, human beings would alternatively attend to regions in images and words in sentences, and select the most salient information considering the interaction between both modalities. In this paper, we propose Cross-modal Adaptive Message Passing (CAMP), which adaptively controls the information flow for message passing across modalities. Our approach not only takes comprehensive and fine-grained cross-modal interactions into account, but also properly handles negative pairs and irrelevant information with an adaptive gating scheme. Moreover, instead of conventional joint embedding approaches for text-image matching, we infer the matching score based on the fused features, and propose a hardest negative binary cross-entropy loss for training. Results on COCO and Flickr30k significantly surpass state-of-the-art methods, demonstrating the effectiveness of our approach.
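The hardest-negative binary cross-entropy loss mentioned in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' released implementation: it assumes a batch of B image-text pairs and a B x B matrix of predicted matching probabilities whose diagonal holds the matched pairs. The loss pushes each positive score toward 1 and, in each direction (image-to-text and text-to-image), pushes only the single highest-scoring negative toward 0.

```python
import numpy as np

def hardest_negative_bce(scores):
    """Hardest-negative binary cross-entropy (illustrative sketch).

    scores: (B, B) array of matching probabilities in (0, 1), where
    scores[i, i] is the matched (positive) pair for sample i.
    """
    eps = 1e-8
    pos = np.diag(scores)                 # positive-pair probabilities
    neg = scores.copy()
    np.fill_diagonal(neg, -np.inf)        # mask positives out of the negatives
    hard_i2t = neg.max(axis=1)            # hardest caption for each image
    hard_t2i = neg.max(axis=0)            # hardest image for each caption
    loss = (-np.log(pos + eps)            # push positives toward 1
            - np.log(1.0 - hard_i2t + eps)  # push hardest negatives toward 0
            - np.log(1.0 - hard_t2i + eps))
    return loss.mean()
```

Mining only the hardest negative per sample, rather than averaging over all negatives, concentrates the gradient on the most confusable mismatched pair in the batch, which is the usual motivation for hardest-negative training objectives in retrieval.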
Persistent Identifier: http://hdl.handle.net/10722/316541
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263
ISI Accession Number ID: WOS:000548549200075

DC Field: Value
dc.contributor.author: Wang, Zihao
dc.contributor.author: Liu, Xihui
dc.contributor.author: Li, Hongsheng
dc.contributor.author: Sheng, Lu
dc.contributor.author: Yan, Junjie
dc.contributor.author: Wang, Xiaogang
dc.contributor.author: Shao, Jing
dc.date.accessioned: 2022-09-14T11:40:42Z
dc.date.available: 2022-09-14T11:40:42Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings of the IEEE International Conference on Computer Vision, 2019, v. 2019-October, p. 5763-5772
dc.identifier.issn: 1550-5499
dc.identifier.uri: http://hdl.handle.net/10722/316541
dc.description.abstract: Text-image cross-modal retrieval is a challenging task in the field of language and vision. Most previous approaches independently embed images and sentences into a joint embedding space and compare their similarities. However, previous approaches rarely explore the interactions between images and sentences before calculating similarities in the joint space. Intuitively, when matching between images and sentences, human beings would alternatively attend to regions in images and words in sentences, and select the most salient information considering the interaction between both modalities. In this paper, we propose Cross-modal Adaptive Message Passing (CAMP), which adaptively controls the information flow for message passing across modalities. Our approach not only takes comprehensive and fine-grained cross-modal interactions into account, but also properly handles negative pairs and irrelevant information with an adaptive gating scheme. Moreover, instead of conventional joint embedding approaches for text-image matching, we infer the matching score based on the fused features, and propose a hardest negative binary cross-entropy loss for training. Results on COCO and Flickr30k significantly surpass state-of-the-art methods, demonstrating the effectiveness of our approach.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE International Conference on Computer Vision
dc.title: CAMP: Cross-modal adaptive message passing for text-image retrieval
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICCV.2019.00586
dc.identifier.scopus: eid_2-s2.0-85081884326
dc.identifier.volume: 2019-October
dc.identifier.spage: 5763
dc.identifier.epage: 5772
dc.identifier.isi: WOS:000548549200075
