Conference Paper: Dependency exploitation: A unified CNN-RNN approach for visual emotion recognition

Title: Dependency exploitation: A unified CNN-RNN approach for visual emotion recognition
Authors: Zhu, Xinge; Li, Liang; Zhang, Weigang; Rao, Tianrong; Xu, Min; Huang, Qingming; Xu, Dong
Issue Date: 2017
Citation: IJCAI International Joint Conference on Artificial Intelligence, 2017, v. 0, p. 3595-3601
Abstract: Visual emotion recognition aims to associate images with appropriate emotions. Visual stimuli at many levels, from low-level cues such as color and texture to high-level cues such as parts and objects, can affect human emotion. However, most existing methods treat the different levels of features as independent entities, without an effective method for fusing them. In this paper, we propose a unified CNN-RNN model that predicts emotion from features fused across levels by exploiting the dependencies among them. Our architecture uses a convolutional neural network (CNN) with multiple layers to extract different levels of features within a multi-task learning framework, in which two related loss functions are introduced to learn the feature representations. To capture the dependencies between low-level and high-level features, a bidirectional recurrent neural network (RNN) integrates the learned features from the different CNN layers. Extensive experiments on both Internet-image and art-photo datasets demonstrate that our method outperforms state-of-the-art methods by at least 7%.
Persistent Identifier: http://hdl.handle.net/10722/321758
ISSN: 1045-0823
2020 SCImago Journal Rankings: 0.649
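The bidirectional fusion the abstract describes (multi-level CNN features read as a sequence by a bidirectional RNN, then classified) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the dimensions, the vanilla tanh RNN cell, and the random weights (standing in for trained CNN features and learned parameters) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 4 CNN feature levels, each
# already projected to a common 8-d space; 6 emotion classes.
num_levels, feat_dim, hidden_dim, num_classes = 4, 8, 16, 6

def rnn_pass(feats, Wx, Wh, b):
    """Run a vanilla tanh RNN over the sequence of feature levels."""
    h = np.zeros(hidden_dim)
    for x in feats:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

# Random parameters stand in for learned weights.
Wx_f = rng.normal(size=(hidden_dim, feat_dim))
Wh_f = rng.normal(size=(hidden_dim, hidden_dim))
Wx_b = rng.normal(size=(hidden_dim, feat_dim))
Wh_b = rng.normal(size=(hidden_dim, hidden_dim))
b_f = np.zeros(hidden_dim)
b_b = np.zeros(hidden_dim)
W_out = rng.normal(size=(num_classes, 2 * hidden_dim))

# One image's multi-level CNN features, ordered low-level -> high-level.
levels = rng.normal(size=(num_levels, feat_dim))

# Bidirectional pass: read the level sequence in both directions so the
# fused state reflects dependencies among low- and high-level features.
h_fwd = rnn_pass(levels, Wx_f, Wh_f, b_f)
h_bwd = rnn_pass(levels[::-1], Wx_b, Wh_b, b_b)
fused = np.concatenate([h_fwd, h_bwd])

# Softmax over emotion classes.
logits = W_out @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # prints (6,)
```

Reading the level sequence in both directions means each direction's final state has seen every level, so the classifier conditions on low-to-high and high-to-low dependencies at once.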

 

DC Field: Value

dc.contributor.author: Zhu, Xinge
dc.contributor.author: Li, Liang
dc.contributor.author: Zhang, Weigang
dc.contributor.author: Rao, Tianrong
dc.contributor.author: Xu, Min
dc.contributor.author: Huang, Qingming
dc.contributor.author: Xu, Dong
dc.date.accessioned: 2022-11-03T02:21:15Z
dc.date.available: 2022-11-03T02:21:15Z
dc.date.issued: 2017
dc.identifier.citation: IJCAI International Joint Conference on Artificial Intelligence, 2017, v. 0, p. 3595-3601
dc.identifier.issn: 1045-0823
dc.identifier.uri: http://hdl.handle.net/10722/321758
dc.description.abstract: Visual emotion recognition aims to associate images with appropriate emotions. Visual stimuli at many levels, from low-level cues such as color and texture to high-level cues such as parts and objects, can affect human emotion. However, most existing methods treat the different levels of features as independent entities, without an effective method for fusing them. In this paper, we propose a unified CNN-RNN model that predicts emotion from features fused across levels by exploiting the dependencies among them. Our architecture uses a convolutional neural network (CNN) with multiple layers to extract different levels of features within a multi-task learning framework, in which two related loss functions are introduced to learn the feature representations. To capture the dependencies between low-level and high-level features, a bidirectional recurrent neural network (RNN) integrates the learned features from the different CNN layers. Extensive experiments on both Internet-image and art-photo datasets demonstrate that our method outperforms state-of-the-art methods by at least 7%.
dc.language: eng
dc.relation.ispartof: IJCAI International Joint Conference on Artificial Intelligence
dc.title: Dependency exploitation: A unified CNN-RNN approach for visual emotion recognition
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.24963/ijcai.2017/503
dc.identifier.scopus: eid_2-s2.0-85031910934
dc.identifier.volume: 0
dc.identifier.spage: 3595
dc.identifier.epage: 3601
