Article: Automatic Assessment of Chinese Dysarthria Using Audio-visual Vowel Graph Attention Network

Title: Automatic Assessment of Chinese Dysarthria Using Audio-visual Vowel Graph Attention Network
Authors: Liu, Xiaokang; Du, Xiaoxia; Liu, Juan; Su, Rongfeng; Ng, Manwa Lawrence; Zhang, Yumei; Yang, Yudong; Zhao, Shaofeng; Wang, Lan; Yan, Nan
Issue Date: 18-Mar-2025
Citation: IEEE Transactions on Audio, Speech and Language Processing, 2025, v. 33
Abstract

Automatic assessment of dysarthria remains highly challenging due to the high heterogeneity of acoustic signals and the limited availability of data. Current research on automatic dysarthria assessment primarily follows two approaches: one combines expert features with machine learning, while the other employs data-driven deep learning methods to extract representations. Studies have shown that expert features can effectively account for the heterogeneity of dysarthria but may lack comprehensiveness, whereas deep learning methods excel at uncovering latent features. Therefore, integrating the advantages of expert knowledge and deep learning to construct a neural network architecture grounded in expert knowledge may benefit both interpretability and assessment performance. In this context, the present paper proposes a vowel graph attention network based on audio-visual information, which effectively integrates the strengths of expert knowledge and deep learning. First, the Vowel Graph Attention Network (VGAN) structure was designed based on vowel space theory, with two branches that mine the information within features and the spatial correlation between vowels, respectively. Second, a feature set combining expert knowledge and deep representations was designed. Finally, visual information was incorporated into the model to further enhance its robustness and generalizability. Tested on the Mandarin Subacute Stroke Dysarthria Multimodal (MSDM) Database, the proposed method outperformed existing approaches in regression experiments targeting Frenchay scores.
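The core idea of attending over a graph of vowel nodes can be illustrated with a minimal, standard graph attention layer. The sketch below is not the paper's VGAN implementation: the node count, feature dimensions, fully connected adjacency, and random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): 5 vowel nodes, e.g. /a/, /i/, /u/,
# /e/, /o/, each carrying a feature vector that in VGAN's spirit could mix
# expert features (formants) with deep audio-visual embeddings.
num_vowels, d_in, d_out = 5, 8, 4
X = rng.standard_normal((num_vowels, d_in))
A = np.ones((num_vowels, num_vowels))   # fully connected vowel graph

W = rng.standard_normal((d_in, d_out))  # shared linear projection
a_src = rng.standard_normal(d_out)      # attention vector, source half
a_dst = rng.standard_normal(d_out)      # attention vector, target half

def gat_layer(X, A, W, a_src, a_dst, alpha=0.2):
    """One standard graph attention layer (Velickovic et al., 2018 style)."""
    H = X @ W                            # projected node features, shape (N, d_out)
    # Pairwise attention logits: e_ij = LeakyReLU(a_src . h_i + a_dst . h_j)
    logits = (H @ a_src)[:, None] + (H @ a_dst)[None, :]
    logits = np.where(logits > 0, logits, alpha * logits)  # LeakyReLU
    logits = np.where(A > 0, logits, -np.inf)              # mask absent edges
    # Row-wise softmax: each vowel's attention over its neighbours sums to 1.
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = w / w.sum(axis=1, keepdims=True)
    return attn @ H, attn                # aggregated features, attention weights

out, attn = gat_layer(X, A, W, a_src, a_dst)
print(out.shape)                         # (5, 4)
print(attn.sum(axis=1))                  # each row sums to 1
```

In a two-branch design like the one the abstract describes, a layer of this kind would model the spatial correlation between vowels, while a separate branch processes the per-vowel features themselves.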


Persistent Identifier: http://hdl.handle.net/10722/359159

 

DC Field | Value | Language
dc.contributor.author | Liu, Xiaokang | -
dc.contributor.author | Du, Xiaoxia | -
dc.contributor.author | Liu, Juan | -
dc.contributor.author | Su, Rongfeng | -
dc.contributor.author | Ng, Manwa Lawrence | -
dc.contributor.author | Zhang, Yumei | -
dc.contributor.author | Yang, Yudong | -
dc.contributor.author | Zhao, Shaofeng | -
dc.contributor.author | Wang, Lan | -
dc.contributor.author | Yan, Nan | -
dc.date.accessioned | 2025-08-22T00:30:39Z | -
dc.date.available | 2025-08-22T00:30:39Z | -
dc.date.issued | 2025-03-18 | -
dc.identifier.citation | IEEE Transactions on Audio, Speech and Language Processing, 2025, v. 33 | -
dc.identifier.uri | http://hdl.handle.net/10722/359159 | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Audio, Speech and Language Processing | -
dc.title | Automatic Assessment of Chinese Dysarthria Using Audio-visual Vowel Graph Attention Network | -
dc.type | Article | -
dc.identifier.doi | 10.1109/TASLPRO.2025.3546562 | -
dc.identifier.volume | 33 | -
dc.identifier.eissn | 2998-4173 | -
