Conference Paper: Bayesian optimization of text representations

Title: Bayesian optimization of text representations
Authors: Yogatama, Dani; Kong, Lingpeng; Smith, Noah A.
Issue Date: 2015
Citation: 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17-21 September 2015. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, p. 2100-2105
Abstract: © 2015 Association for Computational Linguistics. When applying machine learning to problems in NLP, there are many choices to make about how to represent input texts. They can have a big effect on performance, but they are often uninteresting to researchers or practitioners who simply need a module that performs well. We apply sequential model-based optimization over this space of choices and show that it makes standard linear models competitive with more sophisticated, expensive state-of-the-art methods based on latent variables or neural networks on various topic classification and sentiment analysis problems. Our approach is a first step towards black-box NLP systems that work with raw text and do not require manual tuning.
Persistent Identifier: http://hdl.handle.net/10722/296119
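
The procedure described in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: it uses scikit-optimize's GP-based gp_minimize as the sequential model-based optimizer, scikit-learn's 20 Newsgroups data as a stand-in corpus, and a small hypothetical search space (n-gram order, tf-idf vs. raw counts, case folding, and the regularization strength C of a logistic-regression classifier) that only approximates the representation choices tuned in the paper.

```python
# Minimal sketch of sequential model-based (Bayesian) optimization over
# text-representation choices, assuming scikit-learn and scikit-optimize.
# The search space below is illustrative, not the paper's exact space.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Categorical, Integer, Real

# Stand-in binary topic-classification task.
data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

space = [
    Integer(1, 3, name="max_ngram"),                      # n-gram order
    Categorical(["tfidf", "counts"], name="weighting"),   # term weighting
    Categorical([True, False], name="lowercase"),         # case folding
    Real(1e-3, 1e3, prior="log-uniform", name="C"),       # inverse regularization
]

def objective(params):
    """Build one text representation, train a linear model, return -accuracy."""
    max_ngram, weighting, lowercase, C = params
    Vectorizer = TfidfVectorizer if weighting == "tfidf" else CountVectorizer
    X = Vectorizer(ngram_range=(1, int(max_ngram)),
                   lowercase=bool(lowercase)).fit_transform(data.data)
    clf = LogisticRegression(C=float(C), max_iter=1000)
    # gp_minimize minimizes, so negate cross-validated accuracy.
    return -cross_val_score(clf, X, data.target, cv=3).mean()

# A Gaussian-process surrogate model proposes each next configuration to try.
result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best cross-validated accuracy:", -result.fun)
print("best configuration:", result.x)
```

Each call to the objective trains and evaluates one representation/classifier configuration, and the surrogate model uses the observed scores to choose the next configuration; this is the sequential model-based loop the paper applies to make standard linear models competitive.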

 

DC Field: Value
dc.contributor.author: Yogatama, Dani
dc.contributor.author: Kong, Lingpeng
dc.contributor.author: Smith, Noah A.
dc.date.accessioned: 2021-02-11T04:52:52Z
dc.date.available: 2021-02-11T04:52:52Z
dc.date.issued: 2015
dc.identifier.citation: 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17-21 September 2015. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, p. 2100-2105
dc.identifier.uri: http://hdl.handle.net/10722/296119
dc.description.abstract: © 2015 Association for Computational Linguistics. When applying machine learning to problems in NLP, there are many choices to make about how to represent input texts. They can have a big effect on performance, but they are often uninteresting to researchers or practitioners who simply need a module that performs well. We apply sequential model-based optimization over this space of choices and show that it makes standard linear models competitive with more sophisticated, expensive state-of-the-art methods based on latent variables or neural networks on various topic classification and sentiment analysis problems. Our approach is a first step towards black-box NLP systems that work with raw text and do not require manual tuning.
dc.language: eng
dc.relation.ispartof: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: Bayesian optimization of text representations
dc.type: Conference_Paper
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.18653/v1/d15-1251
dc.identifier.scopus: eid_2-s2.0-84959927890
dc.identifier.spage: 2100
dc.identifier.epage: 2105
