Conference Paper: Meta-learning for low-resource neural machine translation

Title: Meta-learning for low-resource neural machine translation
Authors: Gu, J; Wang, Y; Chen, Y; Li, VOK; Cho, K
Issue Date: 2018
Publisher: Association for Computational Linguistics
Citation: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium, 31 October - 4 November 2018, p. 3622-3631
Abstract: In this paper, we propose to extend the recently introduced model-agnostic meta-learning algorithm (MAML; Finn et al., 2017) for low-resource neural machine translation (NMT). We frame low-resource translation as a meta-learning problem where we learn to adapt to low-resource languages based on multilingual high-resource language tasks. We use the universal lexical representation (Gu et al., 2018b) to overcome the input-output mismatch across different languages. We evaluate the proposed meta-learning strategy using eighteen European languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt, Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and five diverse languages (Ro, Lv, Fi, Tr and Ko) as target tasks. We show that the proposed approach significantly outperforms the multilingual, transfer-learning-based approach (Zoph et al., 2016) and enables us to train a competitive NMT system with only a fraction of training examples. For instance, the proposed approach can achieve as high as 22.04 BLEU on Romanian-English WMT'16 by seeing only 16,000 translated words (~600 parallel sentences).
Persistent Identifier: http://hdl.handle.net/10722/278334
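
The abstract builds on model-agnostic meta-learning (MAML; Finn et al., 2017): an outer loop learns an initialization such that a few inner-loop gradient steps on a new task already perform well. As a rough illustration of that two-level optimization, here is a minimal MAML sketch in PyTorch. The toy linear model, random task sampler, and all hyperparameters are hypothetical stand-ins; the paper's actual NMT architecture and universal lexical representation are omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    meta_lr, inner_lr, inner_steps = 1e-3, 1e-2, 1

    model = nn.Linear(8, 8)                       # toy stand-in for an NMT model
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    loss_fn = nn.MSELoss()

    def sample_task():
        """Hypothetical task sampler. In the paper, each meta-training task
        is a high-resource language pair; here it is random toy regression."""
        support = (torch.randn(16, 8), torch.randn(16, 8))
        query = (torch.randn(16, 8), torch.randn(16, 8))
        return support, query

    for step in range(100):
        (xs, ys), (xq, yq) = sample_task()

        # Inner loop: gradient steps on the task's support set, tracking the
        # adapted ("fast") weights as differentiable tensors.
        fast = dict(model.named_parameters())
        for _ in range(inner_steps):
            inner_loss = loss_fn(F.linear(xs, fast["weight"], fast["bias"]), ys)
            grads = torch.autograd.grad(inner_loss, list(fast.values()),
                                        create_graph=True)  # keep graph for meta-gradient
            fast = {name: p - inner_lr * g
                    for (name, p), g in zip(fast.items(), grads)}

        # Outer loop: the meta-objective is the adapted model's loss on the
        # query set; backpropagating through the inner update is MAML's
        # "learning to adapt" signal.
        meta_loss = loss_fn(F.linear(xq, fast["weight"], fast["bias"]), yq)
        meta_opt.zero_grad()
        meta_loss.backward()
        meta_opt.step()

At meta-test time the paper fine-tunes from the meta-learned initialization on a low-resource language pair; in this sketch that corresponds to running only the inner loop on a held-out task.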

 

DC Field: Value
dc.contributor.author: Gu, J
dc.contributor.author: Wang, Y
dc.contributor.author: Chen, Y
dc.contributor.author: Li, VOK
dc.contributor.author: Cho, K
dc.date.accessioned: 2019-10-04T08:11:59Z
dc.date.available: 2019-10-04T08:11:59Z
dc.date.issued: 2018
dc.identifier.citation: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium, 31 October - 4 November 2018, p. 3622-3631
dc.identifier.uri: http://hdl.handle.net/10722/278334
dc.description.abstract: In this paper, we propose to extend the recently introduced model-agnostic meta-learning algorithm (MAML; Finn et al., 2017) for low-resource neural machine translation (NMT). We frame low-resource translation as a meta-learning problem where we learn to adapt to low-resource languages based on multilingual high-resource language tasks. We use the universal lexical representation (Gu et al., 2018b) to overcome the input-output mismatch across different languages. We evaluate the proposed meta-learning strategy using eighteen European languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt, Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and five diverse languages (Ro, Lv, Fi, Tr and Ko) as target tasks. We show that the proposed approach significantly outperforms the multilingual, transfer-learning-based approach (Zoph et al., 2016) and enables us to train a competitive NMT system with only a fraction of training examples. For instance, the proposed approach can achieve as high as 22.04 BLEU on Romanian-English WMT'16 by seeing only 16,000 translated words (~600 parallel sentences).
dc.language: eng
dc.publisher: Association for Computational Linguistics.
dc.relation.ispartof: Conference on Empirical Methods in Natural Language Processing (EMNLP) Proceedings
dc.title: Meta-learning for low-resource neural machine translation
dc.type: Conference_Paper
dc.identifier.email: Li, VOK: vli@eee.hku.hk
dc.identifier.authority: Li, VOK=rp00150
dc.description.nature: link_to_OA_fulltext
dc.identifier.doi: 10.18653/v1/D18-1398
dc.identifier.hkuros: 306537
dc.identifier.spage: 3622
dc.identifier.epage: 3631

Export via OAI-PMH Interface in XML Formats
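
OAI-PMH exposes records like this one as XML via a small set of standard query parameters. Below is a minimal sketch of fetching this record's Dublin Core metadata; the endpoint URL and OAI identifier are assumptions inferred from the handle (10722/278334), not verified values, while the verb, metadataPrefix, and identifier parameters are standard OAI-PMH.

    # Hypothetical OAI-PMH GetRecord request for this record's Dublin Core
    # metadata. BASE and the identifier below are unverified assumptions.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    BASE = "https://hub.hku.hk/oai/request"           # hypothetical endpoint
    params = {
        "verb": "GetRecord",
        "metadataPrefix": "oai_dc",                   # unqualified Dublin Core
        "identifier": "oai:hub.hku.hk:10722/278334",  # hypothetical identifier
    }
    with urlopen(f"{BASE}?{urlencode(params)}") as resp:
        print(resp.read().decode("utf-8"))            # raw OAI-PMH XML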

