Conference Paper: Testing autotrace
Field | Value
---|---
Title | Testing autotrace
Authors | Hahn-Powell, GV; Archangeli, D
Issue Date | 2014
Publisher | Acoustical Society of America. The Journal's web site is located at http://asa.aip.org/jasa.html
Citation | The 2014 Fall Meeting of the Acoustical Society of America, Indianapolis, IN., 27–31 October 2014. In Journal of the Acoustical Society of America, 2014, v. 136 n. 4, p. 2082
Abstract | While ultrasound provides a remarkable tool for tracking the tongue's movements during speech, it has yet to emerge as the powerful research tool it could be. A major roadblock is that the means of appropriately labeling images is a laborious, time-intensive undertaking. In earlier work, Fasel and Berry (2010) introduced a "translational" deep belief network (tDBN) approach to automated labeling of ultrasound images of the tongue, and tested it against a single-speaker set of 3209 images. This study tests the same methodology against a much larger data set (about 40,000 images), using data collected for different studies with multiple speakers and multiple languages. Retraining a "generic" network with a small set of the most erroneously labeled images from language-specific development sets resulted in an almost three-fold increase in precision in the three test cases examined. © 2014 Acoustical Society of America
Persistent Identifier | http://hdl.handle.net/10722/211047
ISSN | 0001-4966 (2023 Impact Factor: 2.1; 2023 SCImago Journal Rankings: 0.687)
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hahn-Powell, GV | - |
dc.contributor.author | Archangeli, D | - |
dc.date.accessioned | 2015-07-03T08:55:52Z | - |
dc.date.available | 2015-07-03T08:55:52Z | - |
dc.date.issued | 2014 | - |
dc.identifier.citation | The 2014 Fall Meeting of the Acoustical Society of America, Indianapolis, IN., 27–31 October 2014. In Journal of the Acoustical Society of America, 2014, v. 136 n. 4, p. 2082 | - |
dc.identifier.issn | 0001-4966 | - |
dc.identifier.uri | http://hdl.handle.net/10722/211047 | - |
dc.description.abstract | While ultrasound provides a remarkable tool for tracking the tongue's movements during speech, it has yet to emerge as the powerful research tool it could be. A major roadblock is that the means of appropriately labeling images is a laborious, time-intensive undertaking. In earlier work, Fasel and Berry (2010) introduced a "translational" deep belief network (tDBN) approach to automated labeling of ultrasound images of the tongue, and tested it against a single-speaker set of 3209 images. This study tests the same methodology against a much larger data set (about 40,000 images), using data collected for different studies with multiple speakers and multiple languages. Retraining a "generic" network with a small set of the most erroneously labeled images from language-specific development sets resulted in an almost three-fold increase in precision in the three test cases examined. © 2014 Acoustical Society of America | - |
dc.language | eng | - |
dc.publisher | Acoustical Society of America. The Journal's web site is located at http://asa.aip.org/jasa.html | - |
dc.relation.ispartof | Journal of the Acoustical Society of America | - |
dc.title | Testing autotrace | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Archangeli, D: darchang@hku.hk | - |
dc.identifier.authority | Archangeli, D=rp01748 | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.doi | 10.1121/1.4899478 | - |
dc.identifier.hkuros | 244581 | - |
dc.identifier.volume | 136 | - |
dc.identifier.issue | 4 | - |
dc.identifier.spage | 2082 | - |
dc.identifier.epage | 2082 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 0001-4966 | - |