Conference Paper: Learning social relation traits from face images

Title: Learning social relation traits from face images
Authors: Zhang, Zhanpeng; Luo, Ping; Loy, Chen Change; Tang, Xiaoou
Issue Date: 2015
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2015, v. 2015 International Conference on Computer Vision, ICCV 2015, p. 3631-3639
Abstract: © 2015 IEEE. Social relation defines the association, e.g., warm, friendliness, and dominance, between two or more people. Motivated by psychological studies, we investigate if such fine grained and high-level relation traits can be characterised and quantified from face images in the wild. To address this challenging problem we propose a deep model that learns a rich face representation to capture gender, expression, head pose, and age-related attributes, and then performs pairwise-face reasoning for relation prediction. To learn from heterogeneous attribute sources, we formulate a new network architecture with a bridging layer to leverage the inherent correspondences among these datasets. It can also cope with missing target attribute labels. Extensive experiments show that our approach is effective for fine-grained social relation learning in images and videos.
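
To make the architecture described in the abstract concrete, the following is a minimal sketch in PyTorch of the kind of network it outlines: two weight-shared face branches, a per-face attribute head (gender, expression, head pose, age), a bridging layer over the concatenated pair features, and a pairwise relation head. All module names, layer sizes, and class counts (FaceEncoder, SocialRelationNet, feat_dim, n_attrs, n_relations) are illustrative assumptions, not the authors' implementation.

# Hedged sketch only: approximates the structure the abstract describes
# (shared face representation + attribute heads + bridging layer + pairwise
# relation reasoning); dimensions and class counts are assumptions.
import torch
import torch.nn as nn

class FaceEncoder(nn.Module):
    """Shared convolutional trunk mapping a face crop to a feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class SocialRelationNet(nn.Module):
    """Two weight-shared face branches, per-face attribute predictions, and a
    bridging layer that fuses both faces for pairwise relation prediction."""
    def __init__(self, feat_dim=256, n_attrs=10, n_relations=8):
        super().__init__()
        self.encoder = FaceEncoder(feat_dim)            # shared between the two faces
        self.attr_head = nn.Linear(feat_dim, n_attrs)   # gender/expression/pose/age-style attributes
        self.bridge = nn.Sequential(                    # bridging layer over the concatenated pair
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
        )
        self.relation_head = nn.Linear(feat_dim, n_relations)

    def forward(self, face_a, face_b):
        fa, fb = self.encoder(face_a), self.encoder(face_b)
        attrs_a, attrs_b = self.attr_head(fa), self.attr_head(fb)
        relation = self.relation_head(self.bridge(torch.cat([fa, fb], dim=1)))
        return attrs_a, attrs_b, relation

# Usage with dummy face crops:
model = SocialRelationNet()
face_a = torch.randn(2, 3, 64, 64)
face_b = torch.randn(2, 3, 64, 64)
attrs_a, attrs_b, relation = model(face_a, face_b)  # relation: (2, n_relations) trait logits

# Missing attribute labels from heterogeneous datasets could be handled by
# multiplying the per-attribute loss terms with a 0/1 label-availability mask,
# in the spirit of the abstract's claim of coping with missing target labels.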
Persistent Identifier: http://hdl.handle.net/10722/273565
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263
ISI Accession Number ID: WOS:000380414100406

 

DC Field | Value | Language
dc.contributor.author | Zhang, Zhanpeng | -
dc.contributor.author | Luo, Ping | -
dc.contributor.author | Loy, Chen Change | -
dc.contributor.author | Tang, Xiaoou | -
dc.date.accessioned | 2019-08-12T09:55:57Z | -
dc.date.available | 2019-08-12T09:55:57Z | -
dc.date.issued | 2015 | -
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2015, v. 2015 International Conference on Computer Vision, ICCV 2015, p. 3631-3639 | -
dc.identifier.issn | 1550-5499 | -
dc.identifier.uri | http://hdl.handle.net/10722/273565 | -
dc.description.abstract | © 2015 IEEE. Social relation defines the association, e.g., warm, friendliness, and dominance, between two or more people. Motivated by psychological studies, we investigate if such fine grained and high-level relation traits can be characterised and quantified from face images in the wild. To address this challenging problem we propose a deep model that learns a rich face representation to capture gender, expression, head pose, and age-related attributes, and then performs pairwise-face reasoning for relation prediction. To learn from heterogeneous attribute sources, we formulate a new network architecture with a bridging layer to leverage the inherent correspondences among these datasets. It can also cope with missing target attribute labels. Extensive experiments show that our approach is effective for fine-grained social relation learning in images and videos. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | -
dc.title | Learning social relation traits from face images | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/ICCV.2015.414 | -
dc.identifier.scopus | eid_2-s2.0-84973860996 | -
dc.identifier.volume | 2015 International Conference on Computer Vision, ICCV 2015 | -
dc.identifier.spage | 3631 | -
dc.identifier.epage | 3639 | -
dc.identifier.isi | WOS:000380414100406 | -
dc.identifier.issnl | 1550-5499 | -
