Article: Automatic Assessment of Chinese Dysarthria Using Audio-visual Vowel Graph Attention Network
| Title | Automatic Assessment of Chinese Dysarthria Using Audio-visual Vowel Graph Attention Network |
|---|---|
| Authors | Liu, Xiaokang; Du, Xiaoxia; Liu, Juan; Su, Rongfeng; Ng, Manwa Lawrence; Zhang, Yumei; Yang, Yudong; Zhao, Shaofeng; Wang, Lan; Yan, Nan |
| Issue Date | 18-Mar-2025 |
| Citation | IEEE Transactions on Audio, Speech and Language Processing, 2025, v. 33 |
| Abstract | Automatic assessment of dysarthria remains a highly challenging task due to the high heterogeneity of acoustic signals and the limited availability of data. Current research on the automatic assessment of dysarthria primarily follows two approaches: one that utilizes expert features combined with machine learning, and another that employs data-driven deep learning methods to extract representations. Studies have shown that expert features can effectively account for the heterogeneity of dysarthria but may lack comprehensiveness. In contrast, deep learning methods excel at uncovering latent features. Therefore, constructing a neural network architecture that integrates the advantages of expert knowledge and deep learning may benefit both interpretability and assessment performance. In this context, the present paper proposes a vowel graph attention network based on audio-visual information, which effectively integrates the strengths of expert knowledge and deep learning. First, a Vowel Graph Attention Network (VGAN) structure based on vowel space theory was designed, with two branches that respectively mine the information within features and the spatial correlations between vowels. Second, a feature set combining expert knowledge and deep representations was designed. Finally, visual information was incorporated into the model to further enhance its robustness and generalizability. Tested on the Mandarin Subacute Stroke Dysarthria Multimodal (MSDM) Database, the method outperformed existing approaches in regression experiments targeting Frenchay scores. |
| Persistent Identifier | http://hdl.handle.net/10722/359159 |
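
To make the graph-attention idea in the abstract concrete, below is a minimal, illustrative PyTorch sketch of an attention layer operating over vowel nodes. The class name `VowelGraphAttention`, the feature dimensions, the six-vowel node set, the pairwise scoring function, and the pooled regression head are all assumptions for illustration; this is not the authors' implementation of VGAN, which the abstract describes as also having a separate feature-mining branch and audio-visual fusion.

```python
# Illustrative sketch only (assumed names and dimensions, not the authors' code).
# Each vowel is one graph node; attention weights model pairwise vowel relations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VowelGraphAttention(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)   # node feature projection
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)    # pairwise attention score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_vowels, in_dim), one feature vector per vowel node
        h = self.proj(x)                                      # (B, V, D)
        B, V, D = h.shape
        hi = h.unsqueeze(2).expand(B, V, V, D)                # node i repeated over j
        hj = h.unsqueeze(1).expand(B, V, V, D)                # node j repeated over i
        scores = self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # (B, V, V)
        alpha = F.softmax(F.leaky_relu(scores), dim=-1)       # attention over neighbour vowels
        return F.elu(alpha @ h)                               # (B, V, D) aggregated node features

if __name__ == "__main__":
    # Hypothetical usage: 6 Mandarin vowels, 128-dim fused audio-visual features per vowel.
    layer = VowelGraphAttention(in_dim=128, out_dim=64)
    nodes = torch.randn(4, 6, 128)                            # (batch, vowels, features)
    out = layer(nodes)                                        # (4, 6, 64)
    score = nn.Linear(64, 1)(out.mean(dim=1))                 # pooled regression head toward a severity score
    print(out.shape, score.shape)
```

In this sketch, each vowel node's input embedding would combine expert acoustic/visual features and deep representations, and the attention weights stand in for the spatial correlations between vowels that the vowel-space branch is described as exploiting.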
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Liu, Xiaokang | - |
| dc.contributor.author | Du, Xiaoxia | - |
| dc.contributor.author | Liu, Juan | - |
| dc.contributor.author | Su, Rongfeng | - |
| dc.contributor.author | Ng, Manwa Lawrence | - |
| dc.contributor.author | Zhang, Yumei | - |
| dc.contributor.author | Yang, Yudong | - |
| dc.contributor.author | Zhao, Shaofeng | - |
| dc.contributor.author | Wang, Lan | - |
| dc.contributor.author | Yan, Nan | - |
| dc.date.accessioned | 2025-08-22T00:30:39Z | - |
| dc.date.available | 2025-08-22T00:30:39Z | - |
| dc.date.issued | 2025-03-18 | - |
| dc.identifier.citation | IEEE Transactions on Audio, Speech and Language Processing, 2025, v. 33 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/359159 | - |
| dc.description.abstract | Automatic assessment of dysarthria remains a highly challenging task due to the high heterogeneity of acoustic signals and the limited availability of data. Current research on the automatic assessment of dysarthria primarily follows two approaches: one that utilizes expert features combined with machine learning, and another that employs data-driven deep learning methods to extract representations. Studies have shown that expert features can effectively account for the heterogeneity of dysarthria but may lack comprehensiveness. In contrast, deep learning methods excel at uncovering latent features. Therefore, constructing a neural network architecture that integrates the advantages of expert knowledge and deep learning may benefit both interpretability and assessment performance. In this context, the present paper proposes a vowel graph attention network based on audio-visual information, which effectively integrates the strengths of expert knowledge and deep learning. First, a Vowel Graph Attention Network (VGAN) structure based on vowel space theory was designed, with two branches that respectively mine the information within features and the spatial correlations between vowels. Second, a feature set combining expert knowledge and deep representations was designed. Finally, visual information was incorporated into the model to further enhance its robustness and generalizability. Tested on the Mandarin Subacute Stroke Dysarthria Multimodal (MSDM) Database, the method outperformed existing approaches in regression experiments targeting Frenchay scores. | - |
| dc.language | eng | - |
| dc.relation.ispartof | IEEE Transactions on Audio, Speech and Language Processing | - |
| dc.title | Automatic Assessment of Chinese Dysarthria Using Audio-visual Vowel Graph Attention Network | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TASLPRO.2025.3546562 | - |
| dc.identifier.volume | 33 | - |
| dc.identifier.eissn | 2998-4173 | - |
