
Article: A speech-level-based segmented model to decode the dynamic auditory attention states in the competing speaker scenes

Authors: Wang, L; Wang, Y; Liu, Z; Wu, EX; Chen, F
Issue Date: 2022
Citation: Frontiers in Neuroscience, 2022
Persistent Identifier: http://hdl.handle.net/10722/319201


Dublin Core record (field: value):

dc.contributor.author: Wang, L
dc.contributor.author: Wang, Y
dc.contributor.author: Liu, Z
dc.contributor.author: Wu, EX
dc.contributor.author: Chen, F
dc.date.accessioned: 2022-10-14T05:08:58Z
dc.date.available: 2022-10-14T05:08:58Z
dc.date.issued: 2022
dc.identifier.citation: Frontiers in Neuroscience, 2022
dc.identifier.uri: http://hdl.handle.net/10722/319201
dc.language: eng
dc.relation.ispartof: Frontiers in Neuroscience
dc.title: A speech-level-based segmented model to decode the dynamic auditory attention states in the competing speaker scenes
dc.type: Article
dc.identifier.email: Wu, EX: ewu@eee.hku.hk
dc.identifier.authority: Wu, EX=rp00193
dc.identifier.hkuros: 338702
