Conference Paper: A neural compositional paradigm for image captioning

Title: A neural compositional paradigm for image captioning
Authors: Dai, Bo; Fidler, Sanja; Lin, Dahua
Issue Date: 2018
Citation: Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 658-668
Abstract: Mainstream captioning models often follow a sequential structure to generate captions, leading to issues such as introduction of irrelevant semantics, lack of diversity in the generated captions, and inadequate generalization performance. In this paper, we present an alternative paradigm for image captioning, which factorizes the captioning procedure into two stages: (1) extracting an explicit semantic representation from the given image; and (2) constructing the caption based on a recursive compositional procedure in a bottom-up manner. Compared to conventional ones, our paradigm better preserves the semantic content through an explicit factorization of semantics and syntax. By using the compositional generation procedure, caption construction follows a recursive structure, which naturally fits the properties of human language. Moreover, the proposed compositional procedure requires less data to train, generalizes better, and yields more diverse captions.
Persistent Identifier: http://hdl.handle.net/10722/352173
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399

DC Field | Value | Language
dc.contributor.author | Dai, Bo | -
dc.contributor.author | Fidler, Sanja | -
dc.contributor.author | Lin, Dahua | -
dc.date.accessioned | 2024-12-16T03:57:07Z | -
dc.date.available | 2024-12-16T03:57:07Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 658-668 | -
dc.identifier.issn | 1049-5258 | -
dc.identifier.uri | http://hdl.handle.net/10722/352173 | -
dc.description.abstract | Mainstream captioning models often follow a sequential structure to generate captions, leading to issues such as introduction of irrelevant semantics, lack of diversity in the generated captions, and inadequate generalization performance. In this paper, we present an alternative paradigm for image captioning, which factorizes the captioning procedure into two stages: (1) extracting an explicit semantic representation from the given image; and (2) constructing the caption based on a recursive compositional procedure in a bottom-up manner. Compared to conventional ones, our paradigm better preserves the semantic content through an explicit factorization of semantics and syntax. By using the compositional generation procedure, caption construction follows a recursive structure, which naturally fits the properties of human language. Moreover, the proposed compositional procedure requires less data to train, generalizes better, and yields more diverse captions. | -
dc.language | eng | -
dc.relation.ispartof | Advances in Neural Information Processing Systems | -
dc.title | A neural compositional paradigm for image captioning | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85064847276 | -
dc.identifier.volume | 2018-December | -
dc.identifier.spage | 658 | -
dc.identifier.epage | 668 | -
