Conference Paper: Face model compression by distilling knowledge from neurons

Title: Face model compression by distilling knowledge from neurons
Authors: Luo, Ping; Zhu, Zhenyao; Liu, Ziwei; Wang, Xiaogang; Tang, Xiaoou
Issue Date: 2016
Publisher: Association for the Advancement of Artificial Intelligence. The conference proceedings' web site is located at https://www.aaai.org/ocs/index.php/AAAI/AAAI16/index
Citation: 30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016, p. 3560-3566
Abstract: © Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Recent advanced face recognition systems are built on large Deep Neural Networks (DNNs) or their ensembles, which have millions of parameters. However, the expensive computation of DNNs makes their deployment difficult on mobile and embedded devices. This work addresses model compression for face recognition, where the learned knowledge of a large teacher network or its ensemble is used as supervision to train a compact student network. Unlike previous works that represent the knowledge by softened label probabilities, which are difficult to fit, we represent the knowledge by the neurons at the higher hidden layer, which preserve as much information as the label probabilities but are more compact. By leveraging the essential characteristics (domain knowledge) of the learned face representation, a neuron selection method is proposed to choose the neurons most relevant to face recognition. Using the selected neurons as supervision to mimic the single networks of DeepID2+ and DeepID3, which are state-of-the-art face recognition systems, a compact student with a simple network structure achieves better verification accuracy on LFW than each of its teachers. When an ensemble of DeepID2+ is used as the teacher, a mimicked student outperforms it and achieves a 51.6× compression ratio and a 90× speed-up in inference, making this cumbersome model applicable on portable devices.
Persistent Identifier: http://hdl.handle.net/10722/273581
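The distillation idea summarised in the abstract — training a compact student to regress the activations of selected teacher neurons rather than fitting softened label probabilities — can be sketched as follows. The variance-based selection heuristic, the toy dimensions, and all names are illustrative assumptions, not the paper's actual domain-knowledge-driven neuron selection method.

```python
import random

def column_variances(feats):
    """Per-neuron activation variance over a batch (list of rows)."""
    n, d = len(feats), len(feats[0])
    means = [sum(row[j] for row in feats) / n for j in range(d)]
    return [sum((row[j] - means[j]) ** 2 for row in feats) / n
            for j in range(d)]

def select_neurons(teacher_feats, k):
    """Pick the k teacher neurons with the highest activation variance
    across the batch -- a simple stand-in for the paper's neuron
    selection, which uses face-specific domain knowledge instead."""
    var = column_variances(teacher_feats)
    return sorted(range(len(var)), key=lambda j: -var[j])[:k]

def mimic_loss(student_feats, teacher_feats, idx):
    """Mean-squared error between the student's features and the
    selected teacher neurons: the feature-regression target used
    in place of softened label probabilities."""
    total, count = 0.0, 0
    for s_row, t_row in zip(student_feats, teacher_feats):
        for i, j in enumerate(idx):
            total += (s_row[i] - t_row[j]) ** 2
            count += 1
    return total / count

# Toy batch: 8 faces, 16 teacher neurons, student mimics the top 4.
random.seed(0)
teacher = [[random.gauss(0, 1) for _ in range(16)] for _ in range(8)]
idx = select_neurons(teacher, k=4)
# A hypothetical well-trained student: close to the selected neurons.
student = [[t[j] + 0.01 * random.gauss(0, 1) for j in idx] for t in teacher]
print(mimic_loss(student, teacher, idx))
```

In practice this loss would be minimised by gradient descent over the student's weights; the sketch only shows the supervision signal itself.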

 

DC Field | Value | Language
dc.contributor.author | Luo, Ping | -
dc.contributor.author | Zhu, Zhenyao | -
dc.contributor.author | Liu, Ziwei | -
dc.contributor.author | Wang, Xiaogang | -
dc.contributor.author | Tang, Xiaoou | -
dc.date.accessioned | 2019-08-12T09:56:00Z | -
dc.date.available | 2019-08-12T09:56:00Z | -
dc.date.issued | 2016 | -
dc.identifier.citation | 30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016, p. 3560-3566 | -
dc.identifier.uri | http://hdl.handle.net/10722/273581 | -
dc.description.abstract | © Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Recent advanced face recognition systems are built on large Deep Neural Networks (DNNs) or their ensembles, which have millions of parameters. However, the expensive computation of DNNs makes their deployment difficult on mobile and embedded devices. This work addresses model compression for face recognition, where the learned knowledge of a large teacher network or its ensemble is used as supervision to train a compact student network. Unlike previous works that represent the knowledge by softened label probabilities, which are difficult to fit, we represent the knowledge by the neurons at the higher hidden layer, which preserve as much information as the label probabilities but are more compact. By leveraging the essential characteristics (domain knowledge) of the learned face representation, a neuron selection method is proposed to choose the neurons most relevant to face recognition. Using the selected neurons as supervision to mimic the single networks of DeepID2+ and DeepID3, which are state-of-the-art face recognition systems, a compact student with a simple network structure achieves better verification accuracy on LFW than each of its teachers. When an ensemble of DeepID2+ is used as the teacher, a mimicked student outperforms it and achieves a 51.6× compression ratio and a 90× speed-up in inference, making this cumbersome model applicable on portable devices. | -
dc.language | eng | -
dc.publisher | Association for the Advancement of Artificial Intelligence. The conference proceedings' web site is located at https://www.aaai.org/ocs/index.php/AAAI/AAAI16/index | -
dc.relation.ispartof | 30th AAAI Conference on Artificial Intelligence, AAAI 2016 | -
dc.title | Face model compression by distilling knowledge from neurons | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_OA_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85007190552 | -
dc.identifier.spage | 3560 | -
dc.identifier.epage | 3566 | -
