Conference Paper: Improving mood classification in music digital libraries by combining lyrics and audio

Title: Improving mood classification in music digital libraries by combining lyrics and audio
Authors: Hu, X; Downie, JS
Keywords: Experimentation; Measurement; Performance; Access points; Classification accuracy; Data sets; Fusion methods; Learning curves; Music digital libraries; Online music; Sentiment analysis; Text feature; Training sample; Audio acoustics; Audio systems; Experiments; Metadata; Text processing; Digital libraries
Issue Date: 2010
Citation: The 10th Annual Joint Conference on Digital Libraries (JCDL2010), Gold Coast, Australia, 21-25 June 2010. In Proceedings of the ACM International Conference on Digital Libraries, 2010, p. 159-168
Abstract: Mood is an emerging metadata type and access point in music digital libraries (MDL) and online music repositories. In this study, we present a comprehensive investigation of the usefulness of lyrics in music mood classification by evaluating and comparing a wide range of lyric text features including linguistic and text stylistic features. We then combine the best lyric features with features extracted from music audio using two fusion methods. The results show that combining lyrics and audio significantly outperformed systems using audio-only features. In addition, the examination of learning curves shows that the hybrid lyric + audio system needed fewer training samples to achieve the same or better classification accuracies than systems using lyrics or audio singularly. These experiments were conducted on a unique large-scale dataset of 5,296 songs (with both audio and lyrics for each) representing 18 mood categories derived from social tags. The findings push forward the state-of-the-art on lyric sentiment analysis and automatic music mood classification and will help make mood a practical access point in music digital libraries. © 2010 ACM.
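The abstract does not name the two fusion methods it evaluates, so the following is a minimal sketch of the two most common choices for combining lyric and audio features: feature concatenation (early fusion) and linear interpolation of per-modality classifier posteriors (late fusion). The linear SVMs, the feature arrays, and the alpha weight are illustrative assumptions, not the authors' exact configuration:

    # Hypothetical sketch of early vs. late fusion of lyric and audio features
    # for mood classification; all names and parameters here are assumptions.
    import numpy as np
    from sklearn.svm import SVC

    def early_fusion(lyric_feats, audio_feats):
        """Early fusion: concatenate the two modalities into one feature vector."""
        return np.hstack([lyric_feats, audio_feats])

    def train_late_fusion(lyric_train, audio_train, y_train):
        """Late fusion: train one probabilistic classifier per modality."""
        clf_lyrics = SVC(kernel="linear", probability=True).fit(lyric_train, y_train)
        clf_audio = SVC(kernel="linear", probability=True).fit(audio_train, y_train)
        return clf_lyrics, clf_audio

    def late_fusion_predict(clf_lyrics, clf_audio, lyric_test, audio_test, alpha=0.5):
        """Linearly interpolate the per-class posteriors of the two classifiers."""
        probs = (alpha * clf_lyrics.predict_proba(lyric_test)
                 + (1 - alpha) * clf_audio.predict_proba(audio_test))
        return clf_lyrics.classes_[np.argmax(probs, axis=1)]

In a late-fusion setup the interpolation weight alpha is typically tuned on a validation set; alpha = 0.5 weights lyrics and audio equally.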
Persistent Identifier: http://hdl.handle.net/10722/180711
ISBN: 9781450300858

DC Field | Value | Language
dc.contributor.author | Hu, X | en_US
dc.contributor.author | Downie, JS | en_US
dc.date.accessioned | 2013-01-28T01:41:32Z | -
dc.date.available | 2013-01-28T01:41:32Z | -
dc.date.issued | 2010 | en_US
dc.identifier.citation | The 10th Annual Joint Conference on Digital Libraries (JCDL2010), Gold Coast, Australia, 21-25 June 2010. In Proceedings of the ACM International Conference on Digital Libraries, 2010, p. 159-168 | en_US
dc.identifier.isbn | 9781450300858 | en_US
dc.identifier.uri | http://hdl.handle.net/10722/180711 | -
dc.description.abstract | Mood is an emerging metadata type and access point in music digital libraries (MDL) and online music repositories. In this study, we present a comprehensive investigation of the usefulness of lyrics in music mood classification by evaluating and comparing a wide range of lyric text features including linguistic and text stylistic features. We then combine the best lyric features with features extracted from music audio using two fusion methods. The results show that combining lyrics and audio significantly outperformed systems using audio-only features. In addition, the examination of learning curves shows that the hybrid lyric + audio system needed fewer training samples to achieve the same or better classification accuracies than systems using lyrics or audio singularly. These experiments were conducted on a unique large-scale dataset of 5,296 songs (with both audio and lyrics for each) representing 18 mood categories derived from social tags. The findings push forward the state-of-the-art on lyric sentiment analysis and automatic music mood classification and will help make mood a practical access point in music digital libraries. © 2010 ACM. | en_US
dc.language | eng | en_US
dc.relation.ispartof | Proceedings of the ACM International Conference on Digital Libraries | -
dc.subject | Experimentation | en_US
dc.subject | Measurement | en_US
dc.subject | Performance | en_US
dc.subject | Access points | en_US
dc.subject | Classification accuracy | en_US
dc.subject | Data sets | en_US
dc.subject | Fusion methods | en_US
dc.subject | Learning curves | en_US
dc.subject | Music digital libraries | en_US
dc.subject | Online music | en_US
dc.subject | Sentiment analysis | en_US
dc.subject | Text feature | en_US
dc.subject | Training sample | en_US
dc.subject | Audio acoustics | en_US
dc.subject | Audio systems | en_US
dc.subject | Experiments | en_US
dc.subject | Metadata | en_US
dc.subject | Text processing | en_US
dc.subject | Digital libraries | en_US
dc.title | Improving mood classification in music digital libraries by combining lyrics and audio | en_US
dc.type | Conference_Paper | en_US
dc.identifier.email | Hu, X: xiaoxhu@hku.hk | en_US
dc.identifier.authority | Hu, X=rp01711 | en_US
dc.description.nature | link_to_subscribed_fulltext | en_US
dc.identifier.doi | 10.1145/1816123.1816146 | en_US
dc.identifier.scopus | eid_2-s2.0-77955115533 | -
dc.identifier.spage | 159 | en_US
dc.identifier.epage | 168 | en_US
dc.customcontrol.immutable | sml 160129 - amend | -

This record can be exported via the repository's OAI-PMH interface in XML formats, or exported to other non-XML formats.
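As a minimal sketch of the XML route, a standard OAI-PMH GetRecord request returns an item's Dublin Core record. The endpoint URL and OAI identifier below are guesses derived from the handle (10722/180711), not confirmed values; only the verb, metadataPrefix, and identifier parameters are standard OAI-PMH:

    # Hypothetical OAI-PMH harvest of this record as Dublin Core XML.
    # The base URL and identifier are assumptions inferred from the handle;
    # check the repository's documentation for the actual endpoint.
    import urllib.parse
    import urllib.request

    BASE_URL = "https://hub.hku.hk/oai/request"  # assumed endpoint
    params = {
        "verb": "GetRecord",
        "metadataPrefix": "oai_dc",                   # unqualified Dublin Core
        "identifier": "oai:hub.hku.hk:10722/180711",  # assumed identifier form
    }
    with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as resp:
        print(resp.read().decode("utf-8"))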