Conference Paper: What do recurrent neural network grammars learn about syntax?

Title: What do recurrent neural network grammars learn about syntax?
Authors: Kuncoro, Adhiguna; Ballesteros, Miguel; Kong, Lingpeng; Dyer, Chris; Neubig, Graham; Smith, Noah A.
Issue Date: 2017
Citation: The 15th Conference of the European Chapter of the Association for Computational Linguistics, Valencia, Spain, 3-7 April 2017. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, 2017, v. 1, p. 1249-1258
Abstract: © 2017 Association for Computational Linguistics. Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-of-the-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model's latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.
Persistent Identifier: http://hdl.handle.net/10722/296152
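Note: the record does not spell out the GA-RNNG composition function itself. The sketch below is only a minimal, hypothetical NumPy illustration (not the authors' implementation) of the idea the abstract describes: child constituent vectors are combined with attention weights conditioned on the parent nonterminal, and those weights can be inspected to see which child behaves like the head. All names here (attention_composition, W_q, W_k) are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def attention_composition(children, nonterminal, W_q, W_k):
    """Compose child constituent vectors into one phrase vector.

    children:    (n, d) array of child representations
    nonterminal: (d,)   embedding of the parent nonterminal label
    W_q, W_k:    (d, d) hypothetical projection matrices
    Returns the composed phrase vector and the attention weights,
    which can be read as a soft notion of headedness.
    """
    query = W_q @ nonterminal        # query derived from the nonterminal
    keys = children @ W_k.T          # one key per child
    scores = keys @ query            # (n,) compatibility scores
    weights = softmax(scores)        # attention distribution over children
    phrase = weights @ children      # weighted sum = phrase representation
    return phrase, weights

# Example: an NP with three children; the child receiving the most
# attention can be compared against a hand-crafted head rule.
rng = np.random.default_rng(0)
d, n = 8, 3
children = rng.normal(size=(n, d))
nonterminal = rng.normal(size=d)
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))
phrase, weights = attention_composition(children, nonterminal, W_q, W_k)
print("attention over children:", np.round(weights, 3))
```

The appeal of this style of probe, as the abstract suggests, is that the attention distribution is a latent quantity the model learns on its own, so comparing it against hand-crafted head rules requires no extra supervision.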


DC Field | Value | Language
dc.contributor.author | Kuncoro, Adhiguna | -
dc.contributor.author | Ballesteros, Miguel | -
dc.contributor.author | Kong, Lingpeng | -
dc.contributor.author | Dyer, Chris | -
dc.contributor.author | Neubig, Graham | -
dc.contributor.author | Smith, Noah A. | -
dc.date.accessioned | 2021-02-11T04:52:57Z | -
dc.date.available | 2021-02-11T04:52:57Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | The 15th Conference of the European Chapter of the Association for Computational Linguistics, Valencia, Spain, 3-7 April 2017. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, 2017, v. 1, p. 1249-1258 | -
dc.identifier.uri | http://hdl.handle.net/10722/296152 | -
dc.description.abstract | © 2017 Association for Computational Linguistics. Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-of-the-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model's latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.title | What do recurrent neural network grammars learn about syntax? | -
dc.type | Conference_Paper | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.18653/v1/e17-1117 | -
dc.identifier.scopus | eid_2-s2.0-85021687903 | -
dc.identifier.volume | 1 | -
dc.identifier.spage | 1249 | -
dc.identifier.epage | 1258 | -
