Article: A Novel Method of Emotion Classification and Reconstruction Using VGGNet and StarGAN for Mixed-Reality Interactions in HKU Campusland Metaverse

Title: A Novel Method of Emotion Classification and Reconstruction Using VGGNet and StarGAN for Mixed-Reality Interactions in HKU Campusland Metaverse
Authors: Lau, Adela S.M.; Luan, Jianduo; Cheung, Liege; Ma, Patrick; Lee, Herbert
Issue Date: 30-Sep-2025
Publisher: IOS Press
Citation: Frontiers in Artificial Intelligence and Applications, 2025, v. 412, p. 399-415
Abstract

Verbal, written, visual, and nonverbal communication are the four main forms of communication. Nonverbal communication, including facial expressions and body gestures, is an effective channel for understanding emotion in communication. With the recent growth of the metaverse, more people and companies have started using it for their business activities. However, communication in the metaverse is currently mostly written, so the environment cannot offer human-like interactions; this discourages participants from joining the metaverse and from carrying out realistic social and business activities there. To make human interactions in the metaverse more realistic, this research develops a novel method for nonverbal communication in the metaverse. We reviewed current methods of emotion classification and regeneration and proposed a method of emotion reconstruction in the metaverse comprising two stages: facial emotion detection and facial emotion reconstruction. For classification, we trained an improved VGG19 convolutional network with a residual masking network and data augmentation on the FER2013 dataset, and compared it against a support vector machine (SVM) baseline. Accuracy improved from 50.2% for the SVM baseline to 70.8% for plain VGG19 and 88.33% for VGGNet with the residual masking network and data augmentation. Finally, we used the StarGAN model to reconstruct the detected emotion on the avatar's face as a case study in HKU Campusland.
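The classification stage described in the abstract can be pictured with a short sketch. The following is a minimal, illustrative fine-tuning loop, assuming PyTorch and torchvision: it loads FER2013, applies generic data augmentation, and fine-tunes a stock VGG19 with a 7-class head. The paper's residual masking network, its exact augmentation policy, and its hyperparameters are not given in the abstract and are not reproduced here, so treat this as a baseline sketch rather than the authors' method.

```python
# Minimal sketch (not the paper's code): fine-tuning VGG19 on FER2013 with
# generic data augmentation. The residual masking network is omitted.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # FER2013 is 48x48 grayscale
    transforms.Resize(224),                       # VGG19 expects 224x224 RGB
    transforms.RandomHorizontalFlip(),            # stand-in augmentation; the
    transforms.RandomRotation(10),                # paper's policy is unspecified
    transforms.ToTensor(),
])

# torchvision's FER2013 loader expects the dataset CSVs under <root>/fer2013/.
train_set = datasets.FER2013(root="data", split="train", transform=train_tf)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# Start from ImageNet weights and replace the final layer with a 7-class head
# for the FER2013 emotion categories.
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 7)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative LR
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown; the paper's schedule is unknown
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The SVM baseline reported in the abstract would be trained on the same images (e.g., flattened pixels or extracted features) to make the 50.2% vs. 70.8% vs. 88.33% comparison meaningful.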


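The reconstruction stage uses StarGAN (Choi et al., 2018), which translates an image across multiple domains with a single generator conditioned on a target-domain label. The sketch below shows only that conditioning step at inference time, assuming a pretrained generator G whose first convolution accepts the image concatenated with the spatially tiled label, as in the reference StarGAN implementation; G, its training on emotion domains, and the EMOTIONS label set are illustrative assumptions, not the paper's code.

```python
# Sketch of StarGAN-style emotion transfer at inference time. G and EMOTIONS
# are hypothetical stand-ins for the paper's trained generator and label set.
import torch

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def transfer_emotion(G: torch.nn.Module, face: torch.Tensor, target: str) -> torch.Tensor:
    """Re-render `face` (1x3xHxW tensor, values in [-1, 1]) with `target` emotion."""
    one_hot = torch.zeros(1, len(EMOTIONS))
    one_hot[0, EMOTIONS.index(target)] = 1.0  # one-hot target domain label
    # StarGAN conditions its generator by tiling the label over the spatial
    # dimensions and concatenating it with the image along the channel axis.
    _, _, h, w = face.shape
    label_map = one_hot.view(1, -1, 1, 1).expand(-1, -1, h, w)
    with torch.no_grad():
        return G(torch.cat([face, label_map], dim=1))
```

In the pipeline described above, the emotion class predicted by the classifier would supply `target`, and the generated face would be mapped onto the avatar in HKU Campusland.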
Persistent Identifier: http://hdl.handle.net/10722/366027
ISSN: 0922-6389
2023 SCImago Journal Rankings: 0.281


DC Field                  Value
dc.contributor.author     Lau, Adela S.M.
dc.contributor.author     Luan, Jianduo
dc.contributor.author     Cheung, Liege
dc.contributor.author     Ma, Patrick
dc.contributor.author     Lee, Herbert
dc.date.accessioned       2025-11-14T02:41:02Z
dc.date.available         2025-11-14T02:41:02Z
dc.date.issued            2025-09-30
dc.identifier.citation    Frontiers in Artificial Intelligence and Applications, 2025, v. 412, p. 399-415
dc.identifier.issn        0922-6389
dc.identifier.uri         http://hdl.handle.net/10722/366027
dc.description.abstract   Verbal, written, visual, and nonverbal communication are the four main forms of communication. Nonverbal communication, including facial expressions and body gestures, is an effective channel for understanding emotion in communication. With the recent growth of the metaverse, more people and companies have started using it for their business activities. However, communication in the metaverse is currently mostly written, so the environment cannot offer human-like interactions; this discourages participants from joining the metaverse and from carrying out realistic social and business activities there. To make human interactions in the metaverse more realistic, this research develops a novel method for nonverbal communication in the metaverse. We reviewed current methods of emotion classification and regeneration and proposed a method of emotion reconstruction in the metaverse comprising two stages: facial emotion detection and facial emotion reconstruction. For classification, we trained an improved VGG19 convolutional network with a residual masking network and data augmentation on the FER2013 dataset, and compared it against a support vector machine (SVM) baseline. Accuracy improved from 50.2% for the SVM baseline to 70.8% for plain VGG19 and 88.33% for VGGNet with the residual masking network and data augmentation. Finally, we used the StarGAN model to reconstruct the detected emotion on the avatar's face as a case study in HKU Campusland.
dc.language               eng
dc.publisher              IOS Press
dc.relation.ispartof      Frontiers in Artificial Intelligence and Applications
dc.rights                 This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title                  A Novel Method of Emotion Classification and Reconstruction Using VGGNet and StarGAN for Mixed-Reality Interactions in HKU Campusland Metaverse
dc.type                   Article
dc.identifier.doi         10.3233/FAIA250738
dc.identifier.volume      412
dc.identifier.spage       399
dc.identifier.epage       415
dc.identifier.eissn       1535-6698
dc.identifier.issnl       0922-6389
