Conference Paper: On the sparsity of neural machine translation models
Field | Value
---|---
Title | On the sparsity of neural machine translation models
Authors | Wang, Y; Wang, L; Li, VOK; Tu, Z
Issue Date | 2020
Publisher | Association for Computational Linguistics
Citation | Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), Virtual Meeting, 16-20 November 2020, p. 1060–1066
Abstract | Modern neural machine translation (NMT) models employ a large number of parameters, which leads to serious over-parameterization and typically causes the underutilization of computational resources. In response to this problem, we empirically investigate whether the redundant parameters can be reused to achieve better performance. Experiments and analyses are systematically conducted on different datasets and NMT architectures. We show that: 1) the pruned parameters can be rejuvenated to improve the baseline model by up to +0.8 BLEU points; 2) the rejuvenated parameters are reallocated to enhance the ability of modeling low-level lexical information.
Description | Short Paper - Gather Session 1A: Machine Translation and Multilinguality
Persistent Identifier | http://hdl.handle.net/10722/287776
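As a rough illustration of the pruning-and-rejuvenation idea described in the abstract, the sketch below (hypothetical, not the authors' code) performs magnitude-based pruning on a toy PyTorch model and then "rejuvenates" the pruned weights by letting them train again. The function names, the 30% sparsity level, and the stand-in feed-forward layer are all assumptions made for illustration only.

```python
# Hypothetical sketch: magnitude pruning followed by "rejuvenation"
# (re-enabling pruned weights for further training). Not the paper's
# actual implementation; it only illustrates the idea from the abstract.
import torch
import torch.nn as nn

def magnitude_prune_masks(model: nn.Module, sparsity: float = 0.3) -> dict:
    """Return a {name: mask} dict that zeroes the smallest-magnitude weights."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:            # skip biases / layer-norm parameters
            continue
        k = int(sparsity * param.numel())
        if k == 0:
            continue
        threshold = param.detach().abs().flatten().kthvalue(k).values
        masks[name] = (param.detach().abs() > threshold).float()
    return masks

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out the pruned weights in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

# Toy usage: a stand-in feed-forward block rather than a full NMT model.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
masks = magnitude_prune_masks(model, sparsity=0.3)
apply_masks(model, masks)   # "pruned" phase: keep applying masks after each update
# "Rejuvenation" phase: stop applying the masks, so the previously pruned
# parameters are free to learn again during continued training.
```

In a real training loop the masks would be re-applied after every optimizer step during the pruned phase; rejuvenation then amounts to dropping the masks and continuing training so the freed parameters can be reused.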
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Y | - |
dc.contributor.author | Wang, L | - |
dc.contributor.author | Li, VOK | - |
dc.contributor.author | Tu, Z | - |
dc.date.accessioned | 2020-10-05T12:03:07Z | - |
dc.date.available | 2020-10-05T12:03:07Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), Virtual Meeting, 16-20 November 2020, p. 1060–1066 | - |
dc.identifier.uri | http://hdl.handle.net/10722/287776 | - |
dc.description | Short Paper - Gather Session 1A: Machine Translation and Multilinguality | - |
dc.description.abstract | Modern neural machine translation (NMT) models employ a large number of parameters, which leads to serious over-parameterization and typically causes the underutilization of computational resources. In response to this problem, we empirically investigate whether the redundant parameters can be reused to achieve better performance. Experiments and analyses are systematically conducted on different datasets and NMT architectures. We show that: 1) the pruned parameters can be rejuvenated to improve the baseline model by up to +0.8 BLEU points; 2) the rejuvenated parameters are reallocated to enhance the ability of modeling low-level lexical information. | - |
dc.language | eng | - |
dc.publisher | Association for Computational Linguistics. | - |
dc.relation.ispartof | Conference on Empirical Methods in Natural Language Processing (EMNLP) 2020 | - |
dc.title | On the sparsity of neural machine translation models | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Li, VOK: vli@eee.hku.hk | - |
dc.identifier.authority | Li, VOK=rp00150 | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.doi | 10.18653/v1/2020.emnlp-main.78 | - |
dc.identifier.hkuros | 315139 | - |
dc.identifier.spage | 1060 | - |
dc.identifier.epage | 1066 | - |