Conference Paper: Segmental recurrent neural networks

Title: Segmental recurrent neural networks
Authors: Kong, Lingpeng; Dyer, Chris; Smith, Noah A.
Issue Date: 2016
Citation: 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, 2016
Abstract: © ICLR 2016: San Juan, Puerto Rico. All Rights Reserved. We introduce segmental recurrent neural networks (SRNNs) which define, given an input sequence, a joint probability distribution over segmentations of the input and labelings of the segments. Representations of the input segments (i.e., contiguous subsequences of the input) are computed by encoding their constituent tokens using bidirectional recurrent neural nets, and these “segment embeddings” are used to define compatibility scores with output labels. These local compatibility scores are integrated using a global semi-Markov conditional random field. Both fully supervised training—in which segment boundaries and labels are observed—as well as partially supervised training—in which segment boundaries are latent—are straightforward. Experiments on handwriting recognition and joint Chinese word segmentation/POS tagging show that, compared to models that do not explicitly represent segments such as BIO tagging schemes and connectionist temporal classification (CTC), SRNNs obtain substantially higher accuracies.
Persistent Identifier: http://hdl.handle.net/10722/296009
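To make the model in the abstract concrete, here is a minimal sketch (not the authors' released code) of the two pieces it names: segment embeddings built from a bidirectional RNN, and the semi-Markov dynamic program that sums local segment-label compatibility scores over all segmentations of the input. Everything here is an illustrative assumption rather than the paper's exact architecture: PyTorch as the framework, the hidden sizes, the max_seg_len cap, and in particular the shortcut of representing a segment by the concatenated BiLSTM states at its endpoints, where the paper encodes each segment's constituent tokens with their own BiRNN pass.

import torch
import torch.nn as nn


class SegmentalScorer(nn.Module):
    """Scores every candidate segment (i, j] of the input with every label.

    Hypothetical simplification: instead of re-encoding each segment's tokens
    with a fresh BiRNN as in the paper, we run one BiLSTM over the whole
    sequence and concatenate the states at the segment endpoints.
    """

    def __init__(self, input_dim, hidden_dim, num_labels, max_seg_len):
        super().__init__()
        self.birnn = nn.LSTM(input_dim, hidden_dim,
                             bidirectional=True, batch_first=True)
        self.label_scorer = nn.Linear(4 * hidden_dim, num_labels)
        self.max_seg_len = max_seg_len

    def forward(self, x):
        # x: (1, T, input_dim) -- a single sequence, for clarity.
        h, _ = self.birnn(x)                      # (1, T, 2 * hidden_dim)
        T = h.size(1)
        num_labels = self.label_scorer.out_features
        # scores[i, j, y] = log-potential of segment (i, j] with label y;
        # -inf marks segments longer than max_seg_len (disallowed).
        scores = torch.full((T + 1, T + 1, num_labels), float('-inf'))
        for i in range(T):
            for j in range(i + 1, min(i + self.max_seg_len, T) + 1):
                seg = torch.cat([h[0, i], h[0, j - 1]], dim=-1)
                scores[i, j] = self.label_scorer(seg)
        return scores


def log_partition(scores):
    """Semi-Markov forward algorithm: log of the sum, over all segmentations
    and labelings, of the exponentiated segment scores.

    alpha[j] = logsumexp over left boundaries i < j and labels y of
               alpha[i] + scores[i, j, y], with alpha[0] = 0.
    """
    T = scores.size(0) - 1
    alpha = [torch.tensor(0.0)]                   # alpha[0] = log 1
    for j in range(1, T + 1):
        per_i = torch.logsumexp(scores[:j, j], dim=-1)   # sum out labels
        alpha_j = torch.logsumexp(torch.stack(alpha) + per_i, dim=0)
        alpha.append(alpha_j)
    return alpha[T]


# Illustrative fully supervised use: the loss is the log-partition minus the
# score of the observed gold segmentation (segment boundaries and labels).
torch.manual_seed(0)
model = SegmentalScorer(input_dim=8, hidden_dim=16, num_labels=5, max_seg_len=4)
x = torch.randn(1, 10, 8)                         # one sequence of 10 frames
scores = model(x)
gold = [(0, 3, 1), (3, 7, 0), (7, 10, 4)]         # (start, end, label) triples
gold_score = sum(scores[i, j, y] for i, j, y in gold)
nll = log_partition(scores) - gold_score
nll.backward()

The same log_partition routine is what makes the partially supervised case in the abstract straightforward: with latent boundaries, the gold-score term is itself replaced by a constrained dynamic program that sums over all segmentations consistent with the observed labels.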

 

DC Field: Value
dc.contributor.author: Kong, Lingpeng
dc.contributor.author: Dyer, Chris
dc.contributor.author: Smith, Noah A.
dc.date.accessioned: 2021-02-11T04:52:38Z
dc.date.available: 2021-02-11T04:52:38Z
dc.date.issued: 2016
dc.identifier.citation: 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, 2016
dc.identifier.uri: http://hdl.handle.net/10722/296009
dc.description.abstract: © ICLR 2016: San Juan, Puerto Rico. All Rights Reserved. We introduce segmental recurrent neural networks (SRNNs) which define, given an input sequence, a joint probability distribution over segmentations of the input and labelings of the segments. Representations of the input segments (i.e., contiguous subsequences of the input) are computed by encoding their constituent tokens using bidirectional recurrent neural nets, and these “segment embeddings” are used to define compatibility scores with output labels. These local compatibility scores are integrated using a global semi-Markov conditional random field. Both fully supervised training—in which segment boundaries and labels are observed—as well as partially supervised training—in which segment boundaries are latent—are straightforward. Experiments on handwriting recognition and joint Chinese word segmentation/POS tagging show that, compared to models that do not explicitly represent segments such as BIO tagging schemes and connectionist temporal classification (CTC), SRNNs obtain substantially higher accuracies.
dc.language: eng
dc.relation.ispartof: 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings
dc.title: Segmental recurrent neural networks
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.scopus: eid_2-s2.0-85083953994
