Conference Paper: Contrastive learning for image captioning

Title: Contrastive learning for image captioning
Authors: Dai, Bo; Lin, Dahua
Issue Date: 2017
Citation: Advances in Neural Information Processing Systems, 2017, v. 2017-December, p. 899-908
Abstract: Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learning (CL), for image captioning. Specifically, via two constraints formulated on top of a reference model, the proposed method can encourage distinctiveness, while maintaining the overall quality of the generated captions. We tested our method on two challenging datasets, where it improves the baseline model by significant margins. We also showed in our studies that the proposed method is generic and can be used for models with various structures.
Persistent Identifier: http://hdl.handle.net/10722/352168
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399

 

DC Field: Value

dc.contributor.author: Dai, Bo
dc.contributor.author: Lin, Dahua
dc.date.accessioned: 2024-12-16T03:57:06Z
dc.date.available: 2024-12-16T03:57:06Z
dc.date.issued: 2017
dc.identifier.citation: Advances in Neural Information Processing Systems, 2017, v. 2017-December, p. 899-908
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/352168
dc.description.abstract: Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learning (CL), for image captioning. Specifically, via two constraints formulated on top of a reference model, the proposed method can encourage distinctiveness, while maintaining the overall quality of the generated captions. We tested our method on two challenging datasets, where it improves the baseline model by significant margins. We also showed in our studies that the proposed method is generic and can be used for models with various structures.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Contrastive learning for image captioning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85047005782
dc.identifier.volume: 2017-December
dc.identifier.spage: 899
dc.identifier.epage: 908
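
Note: The abstract describes Contrastive Learning (CL) as two constraints formulated on top of a reference model, pushing the trained captioner to be more distinctive than the reference on matched image-caption pairs and less confident than the reference on mismatched pairs. The sketch below is only a minimal illustration of that idea, not the paper's exact objective; the function name contrastive_caption_loss, the log-sigmoid saturation, and the way negatives are formed are assumptions made for this example.

    # Minimal sketch (assumed formulation, not the paper's exact objective).
    # Inputs are per-example log-probabilities log p(caption | image) from the
    # trained target model and from a frozen reference model.
    import torch
    import torch.nn.functional as F

    def contrastive_caption_loss(target_logp_pos, ref_logp_pos,
                                 target_logp_neg, ref_logp_neg):
        # Gap between target and reference on matched pairs: push it up.
        pos_gap = target_logp_pos - ref_logp_pos
        # Gap on mismatched (negative) pairs: push it down.
        neg_gap = target_logp_neg - ref_logp_neg
        # Log-sigmoid saturates, so pairs that already satisfy the constraint
        # contribute little, keeping the overall caption quality intact.
        return -(F.logsigmoid(pos_gap).mean() + F.logsigmoid(-neg_gap).mean())

    # Dummy usage with random log-probabilities; in practice these would come
    # from the captioning models being trained and used as the reference.
    if __name__ == "__main__":
        loss = contrastive_caption_loss(torch.randn(4), torch.randn(4),
                                        torch.randn(4), torch.randn(4))
        print(loss.item())

One common way to form the mismatched pairs in such a setup is to pair each image with a ground-truth caption of a different image from the same batch; the paper's precise choice of negatives and saturation function should be taken from the paper itself rather than this sketch.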
