Conference Paper: Good for Misconceived Reasons: An Empirical Revisiting on the Need for Visual Context in Multimodal Machine Translation

Title: Good for Misconceived Reasons: An Empirical Revisiting on the Need for Visual Context in Multimodal Machine Translation
Authors: Wu, Z; Kong, L; Bi, W; Li, X; Kao, CM
Issue Date: 2021
Publisher: Association for Computational Linguistics
Citation: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Virtual Meeting, Bangkok, Thailand, 1-6 August 2021, v. 1: Long Papers, p. 6153-6166
Abstract: A neural multimodal machine translation (MMT) system is one that aims to perform better translation by extending conventional text-only translation models with multimodal information. Many recent studies report improvements when equipping their models with the multimodal module, despite the controversy of whether such improvements indeed come from the multimodal part. We revisit the contribution of multimodal information in MMT by devising two interpretable MMT models. To our surprise, although our models replicate similar gains as recently developed multimodal-integrated systems achieved, our models learn to ignore the multimodal information. Upon further investigation, we discover that the improvements achieved by the multimodal models over text-only counterparts are in fact results of the regularization effect. We report empirical findings that highlight the importance of MMT models’ interpretability, and discuss how our findings will benefit future research.
Description: Poster 3R: Language Grounding to Vision, Robotics and Beyond - Anthology ID: 2021.acl-long.480
Persistent Identifier: http://hdl.handle.net/10722/304335
ISBN: 9781954085527
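
Given the DOI recorded for this paper (see dc.identifier.doi in the Dublin Core table below), a machine-readable citation can usually be obtained through DOI content negotiation. The following is a minimal sketch, assuming the DOI resolver supports the application/x-bibtex media type and that the requests package is installed:

import requests

# DOI taken from the dc.identifier.doi field of this record.
DOI = "10.18653/v1/2021.acl-long.480"

response = requests.get(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/x-bibtex"},  # ask the resolver for a BibTeX rendering
    timeout=30,
)
response.raise_for_status()
print(response.text)  # BibTeX entry for the paper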

 

DC Field | Value | Language
dc.contributor.author | Wu, Z | -
dc.contributor.author | Kong, L | -
dc.contributor.author | Bi, W | -
dc.contributor.author | Li, X | -
dc.contributor.author | Kao, CM | -
dc.date.accessioned | 2021-09-23T08:58:36Z | -
dc.date.available | 2021-09-23T08:58:36Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Virtual Meeting, Bangkok, Thailand, 1-6 August 2021, v. 1: Long Papers, p. 6153-6166 | -
dc.identifier.isbn | 9781954085527 | -
dc.identifier.uri | http://hdl.handle.net/10722/304335 | -
dc.description | Poster 3R: Language Grounding to Vision, Robotics and Beyond - Anthology ID: 2021.acl-long.480 | -
dc.description.abstract | A neural multimodal machine translation (MMT) system is one that aims to perform better translation by extending conventional text-only translation models with multimodal information. Many recent studies report improvements when equipping their models with the multimodal module, despite the controversy of whether such improvements indeed come from the multimodal part. We revisit the contribution of multimodal information in MMT by devising two interpretable MMT models. To our surprise, although our models replicate similar gains as recently developed multimodal-integrated systems achieved, our models learn to ignore the multimodal information. Upon further investigation, we discover that the improvements achieved by the multimodal models over text-only counterparts are in fact results of the regularization effect. We report empirical findings that highlight the importance of MMT models’ interpretability, and discuss how our findings will benefit future research. | -
dc.language | eng | -
dc.publisher | Association for Computational Linguistics. | -
dc.relation.ispartof | Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.title | Good for Misconceived Reasons: An Empirical Revisiting on the Need for Visual Context in Multimodal Machine Translation | -
dc.type | Conference_Paper | -
dc.identifier.email | Kong, L: lpk@cs.hku.hk | -
dc.identifier.email | Kao, CM: kao@cs.hku.hk | -
dc.identifier.authority | Kong, L=rp02775 | -
dc.identifier.authority | Kao, CM=rp00123 | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.18653/v1/2021.acl-long.480 | -
dc.identifier.hkuros | 324951 | -
dc.identifier.volume | 1 | -
dc.identifier.spage | 6153 | -
dc.identifier.epage | 6166 | -
dc.publisher.place | Stroudsburg, PA, USA | -

Export: this record can be harvested via the repository's OAI-PMH interface in XML formats, or exported to other non-XML formats.
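
As a rough illustration of the OAI-PMH export mentioned above, the sketch below issues a GetRecord request for this record's Dublin Core metadata. The endpoint URL and OAI identifier are assumptions inferred from the handle 10722/304335, not values confirmed by this page; verify both against the repository's OAI-PMH documentation before use.

import xml.etree.ElementTree as ET

import requests

OAI_ENDPOINT = "https://hub.hku.hk/oai/request"    # assumed endpoint URL
RECORD_ID = "oai:hub.hku.hk:10722/304335"          # assumed OAI identifier

params = {
    "verb": "GetRecord",           # standard OAI-PMH verb for fetching a single record
    "metadataPrefix": "oai_dc",    # unqualified Dublin Core, as in the table above
    "identifier": RECORD_ID,
}
response = requests.get(OAI_ENDPOINT, params=params, timeout=30)
response.raise_for_status()

# Walk the XML response and print every Dublin Core element (dc:title, dc:creator, ...).
DC_NS = "{http://purl.org/dc/elements/1.1/}"
root = ET.fromstring(response.content)
for element in root.iter():
    if element.tag.startswith(DC_NS):
        print(f"dc.{element.tag[len(DC_NS):]}: {(element.text or '').strip()}")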