Conference Paper: Segmental recurrent neural networks for end-to-end speech recognition

Title: Segmental recurrent neural networks for end-to-end speech recognition
Authors: Lu, Liang; Kong, Lingpeng; Dyer, Chris; Smith, Noah A.; Renals, Steve
Keywords: End-to-end speech recognition; Segmental CRF; Recurrent neural networks
Issue Date: 2016
Citation: INTERSPEECH 2016, San Francisco, CA, 8-12 September 2016. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016), 2016, pp. 385-389
Abstract: Copyright © 2016 ISCA. We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects a segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Unlike most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, the model marginalises out all possible segmentations, and features are extracted from an RNN trained jointly with the segmental CRF. The model is therefore self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues, as well as a method to speed up training in the context of speech recognition. In experiments on the TIMIT dataset, we achieved a 17.3% phone error rate (PER) from first-pass decoding, the best reported result using CRFs, despite using only a zeroth-order CRF and no language model.
Persistent Identifier: http://hdl.handle.net/10722/296137
ISSN: 2308-457X
2020 SCImago Journal Rankings: 0.689
ISI Accession Number ID: WOS:000409394400082
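
The abstract's core computation is a zeroth-order segmental CRF whose segment scores are built from RNN features, trained by marginalising over all possible segmentations. As a minimal illustrative sketch of that marginalisation (not the authors' implementation; the feature array, the seg_score function, and the maximum-segment-length cap are hypothetical stand-ins), the forward recursion below computes the log partition function over labelled segmentations:

    import numpy as np
    from scipy.special import logsumexp

    def log_partition(frame_feats, seg_score, num_labels, max_seg_len):
        """Log partition function of a zeroth-order segmental CRF.

        With zeroth-order dependencies, a segmentation's score is the sum
        of its per-segment scores, so marginalising over all segmentations
        reduces to the forward recursion
            alpha[t] = logsumexp over (s, l) of alpha[s] + seg_score(feats[s:t], l)

        frame_feats : (T, D) array of frame-level features (e.g. RNN outputs).
        seg_score   : callable(feats_slice, label) -> unnormalised log score.
        max_seg_len : cap on segment duration, bounding the inner loop.
        """
        T = frame_feats.shape[0]
        alpha = np.full(T + 1, -np.inf)
        alpha[0] = 0.0  # log-score of the empty prefix
        for t in range(1, T + 1):
            candidates = [
                alpha[s] + seg_score(frame_feats[s:t], label)
                for s in range(max(0, t - max_seg_len), t)
                for label in range(num_labels)
            ]
            alpha[t] = logsumexp(candidates)
        return alpha[T]  # log Z(x), summed over all labelled segmentations

Capping max_seg_len keeps the recursion O(T * max_seg_len * num_labels) rather than O(T^2 * num_labels); limiting segment duration is one common way to make such models tractable, though the paper's own training speed-ups may differ from this simple cap.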
DC Field | Value | Language
dc.contributor.author | Lu, Liang | -
dc.contributor.author | Kong, Lingpeng | -
dc.contributor.author | Dyer, Chris | -
dc.contributor.author | Smith, Noah A. | -
dc.contributor.author | Renals, Steve | -
dc.date.accessioned | 2021-02-11T04:52:55Z | -
dc.date.available | 2021-02-11T04:52:55Z | -
dc.date.issued | 2016 | -
dc.identifier.citation | INTERSPEECH 2016, San Francisco, CA, 8-12 September 2016. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016), 2016, pp. 385-389 | -
dc.identifier.issn | 2308-457X | -
dc.identifier.uri | http://hdl.handle.net/10722/296137 | -
dc.description.abstract | Copyright © 2016 ISCA. We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects a segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Unlike most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, the model marginalises out all possible segmentations, and features are extracted from an RNN trained jointly with the segmental CRF. The model is therefore self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues, as well as a method to speed up training in the context of speech recognition. In experiments on the TIMIT dataset, we achieved a 17.3% phone error rate (PER) from first-pass decoding, the best reported result using CRFs, despite using only a zeroth-order CRF and no language model. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016) | -
dc.subject | End-to-end speech recognition | -
dc.subject | Segmental CRF | -
dc.subject | Recurrent neural networks | -
dc.title | Segmental recurrent neural networks for end-to-end speech recognition | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_OA_fulltext | -
dc.identifier.doi | 10.21437/Interspeech.2016-40 | -
dc.identifier.scopus | eid_2-s2.0-84994242299 | -
dc.identifier.spage | 385 | -
dc.identifier.epage | 389 | -
dc.identifier.eissn | 1990-9772 | -
dc.identifier.isi | WOS:000409394400082 | -
