Links for fulltext (may require subscription):
- Publisher Website: https://doi.org/10.18653/v1/d15-1251
- Scopus: eid_2-s2.0-84959927890
Conference Paper: Bayesian optimization of text representations
Field | Value |
---|---|
Title | Bayesian optimization of text representations |
Authors | Yogatama, Dani; Kong, Lingpeng; Smith, Noah A. |
Issue Date | 2015 |
Citation | 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17-21 September 2015. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, p. 2100-2105 |
Abstract | © 2015 Association for Computational Linguistics. When applying machine learning to problems in NLP, there are many choices to make about how to represent input texts. They can have a big effect on performance, but they are often uninteresting to researchers or practitioners who simply need a module that performs well. We apply sequential model-based optimization over this space of choices and show that it makes standard linear models competitive with more sophisticated, expensive state-of-the-art methods based on latent variables or neural networks on various topic classification and sentiment analysis problems. Our approach is a first step towards black-box NLP systems that work with raw text and do not require manual tuning. |
Persistent Identifier | http://hdl.handle.net/10722/296119 |
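
The approach described in the abstract, sequential model-based (Bayesian) optimization over the space of text-representation choices, can be sketched with off-the-shelf tools. The example below is not the authors' implementation: it assumes scikit-learn and scikit-optimize, an illustrative search space (n-gram order, binary weighting, lowercasing, regularization strength C) rather than the paper's exact one, and a small 20 Newsgroups subset standing in for the paper's topic-classification tasks.

```python
# Hedged sketch, not the authors' code: Bayesian optimization over
# text-representation choices, using scikit-optimize's GP-based optimizer
# and a linear classifier. The search space here is illustrative only.
from skopt import gp_minimize
from skopt.space import Categorical, Integer, Real
from skopt.utils import use_named_args
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# A small two-class topic-classification task as a stand-in dataset.
data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

# Representation and regularization choices to search over (assumed, not
# the paper's exact space).
space = [
    Integer(1, 3, name="max_ngram"),            # highest n-gram order
    Categorical([True, False], name="binary"),  # binary vs. weighted counts
    Categorical([True, False], name="lowercase"),
    Real(1e-3, 1e3, prior="log-uniform", name="C"),  # inverse reg. strength
]

@use_named_args(space)
def objective(max_ngram, binary, lowercase, C):
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, int(max_ngram)), binary=binary,
                        lowercase=lowercase),
        LogisticRegression(C=C, max_iter=1000),
    )
    # gp_minimize minimizes, so return negative cross-validated accuracy.
    return -cross_val_score(model, data.data, data.target, cv=3).mean()

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best accuracy:", -result.fun)
print("best configuration:", result.x)
```

Each call to the objective trains a linear model under one candidate representation; the Gaussian-process surrogate then proposes the next configuration to evaluate, which is the "sequential model-based optimization" loop the abstract refers to.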
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yogatama, Dani | - |
dc.contributor.author | Kong, Lingpeng | - |
dc.contributor.author | Smith, Noah A. | - |
dc.date.accessioned | 2021-02-11T04:52:52Z | - |
dc.date.available | 2021-02-11T04:52:52Z | - |
dc.date.issued | 2015 | - |
dc.identifier.citation | 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17-21 September 2015. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, p. 2100-2105 | - |
dc.identifier.uri | http://hdl.handle.net/10722/296119 | - |
dc.description.abstract | © 2015 Association for Computational Linguistics. When applying machine learning to problems in NLP, there are many choices to make about how to represent input texts. They can have a big effect on performance, but they are often uninteresting to researchers or practitioners who simply need a module that performs well. We apply sequential model-based optimization over this space of choices and show that it makes standard linear models competitive with more sophisticated, expensive state-of-the-art methods based on latent variables or neural networks on various topic classification and sentiment analysis problems. Our approach is a first step towards black-box NLP systems that work with raw text and do not require manual tuning. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.title | Bayesian optimization of text representations | - |
dc.type | Conference_Paper | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.18653/v1/d15-1251 | - |
dc.identifier.scopus | eid_2-s2.0-84959927890 | - |
dc.identifier.spage | 2100 | - |
dc.identifier.epage | 2105 | - |