Conference Paper: Variational smoothing in recurrent neural network language models

Title: Variational smoothing in recurrent neural network language models
Authors: Kong, Lingpeng; Melis, Gabor; Ling, Wang; Yu, Lei; Yogatama, Dani
Issue Date: 2019
Citation: 7th International Conference on Learning Representations (ICLR 2019), New Orleans, LA, 6-9 May 2019
Abstract: We present a new theoretical perspective of data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational distribution (i.e., a mixture of Gaussians whose weights depend on statistics derived from the corpus such as the unigram distribution). We use this insight to propose a more principled method to apply at prediction time and propose natural extensions to data noising under the variational framework. In particular, we propose variational smoothing with tied input and output embedding matrices and an element-wise variational smoothing method. We empirically verify our analysis on two benchmark language modeling datasets and demonstrate performance improvements over existing data noising methods.
Persistent Identifier: http://hdl.handle.net/10722/296275
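The abstract's central observation is that corpus-driven data noising, where input tokens are occasionally replaced during training by draws from the unigram distribution, can be read as a Bayesian RNN with a mixture-of-Gaussians variational distribution whose weights come from corpus statistics. The following is a minimal sketch of that data-noising step only, assuming PyTorch; identifiers such as noise_prob and unigram_probs are illustrative assumptions and are not taken from the paper.

# Minimal sketch (not the authors' code) of unigram-based data noising in an
# RNN language model, the technique the abstract reinterprets as a Bayesian
# RNN with a unigram-weighted mixture-of-Gaussians variational distribution.
import torch

def noised_embeddings(token_ids, embedding, unigram_probs, noise_prob=0.1):
    """With probability `noise_prob`, replace each input token by a token
    drawn from the unigram distribution before looking up its embedding."""
    # Sample a candidate replacement token for every position.
    replacements = torch.multinomial(
        unigram_probs, num_samples=token_ids.numel(), replacement=True
    ).view_as(token_ids)
    # Bernoulli mask deciding which positions actually get noised.
    noise_mask = torch.rand_like(token_ids, dtype=torch.float) < noise_prob
    noised_ids = torch.where(noise_mask, replacements, token_ids)
    return embedding(noised_ids)

# Usage sketch: a 10k-word vocabulary, 300-d embeddings, one batch of token ids.
vocab_size, embed_dim = 10_000, 300
embedding = torch.nn.Embedding(vocab_size, embed_dim)
unigram_probs = torch.ones(vocab_size) / vocab_size  # stand-in for corpus unigram counts
token_ids = torch.randint(0, vocab_size, (32, 35))   # (batch, sequence length)
inputs = noised_embeddings(token_ids, embedding, unigram_probs)  # fed to the RNN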


DC Field: Value
dc.contributor.author: Kong, Lingpeng
dc.contributor.author: Melis, Gabor
dc.contributor.author: Ling, Wang
dc.contributor.author: Yu, Lei
dc.contributor.author: Yogatama, Dani
dc.date.accessioned: 2021-02-11T04:53:13Z
dc.date.available: 2021-02-11T04:53:13Z
dc.date.issued: 2019
dc.identifier.citation: 7th International Conference on Learning Representations (ICLR 2019), New Orleans, LA, 6-9 May 2019
dc.identifier.uri: http://hdl.handle.net/10722/296275
dc.description.abstract: We present a new theoretical perspective of data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational distribution (i.e., a mixture of Gaussians whose weights depend on statistics derived from the corpus such as the unigram distribution). We use this insight to propose a more principled method to apply at prediction time and propose natural extensions to data noising under the variational framework. In particular, we propose variational smoothing with tied input and output embedding matrices and an element-wise variational smoothing method. We empirically verify our analysis on two benchmark language modeling datasets and demonstrate performance improvements over existing data noising methods.
dc.language: eng
dc.relation.ispartof: 7th International Conference on Learning Representations (ICLR 2019)
dc.title: Variational smoothing in recurrent neural network language models
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.scopus: eid_2-s2.0-85083950789
