Conference Paper: Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning

Title: Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning
Authors: Ge, W; Yu, Y
Issue Date: 2017
Publisher: IEEE Computer Society. The proceedings are located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147
Citation: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, 21-26 July 2017, p. 10-19
Abstract: Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.
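The abstract's key step, selecting the source-task subset by low-level similarity, can be sketched as a nearest-neighbor search over per-image descriptors. This is a minimal illustration, not the paper's implementation: the random arrays stand in for the pooled filter-bank descriptors the paper computes, and the Euclidean nearest-neighbor rule and the value k=5 are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each row is one image's low-level descriptor
# (the paper derives these from filter bank responses).
source_desc = rng.normal(size=(1000, 64))   # abundant source images
target_desc = rng.normal(size=(50, 64))     # scarce target images

def select_source_subset(source, target, k=5):
    """For each target descriptor, take its k nearest source
    descriptors (Euclidean distance); the union of all neighbors
    is the source subset used during joint fine-tuning."""
    # pairwise squared distances, shape (n_target, n_source)
    d2 = ((target[:, None, :] - source[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]   # k nearest per target image
    return np.unique(nearest)                 # deduplicated source indices

subset = select_source_subset(source_desc, target_desc, k=5)
print(len(subset), "of", len(source_desc), "source images selected")
```

The selected subset then joins the target data in training, with the convolutional layers shared between the two classification heads.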
Persistent Identifier: http://hdl.handle.net/10722/243235
ISI Accession Number ID: WOS:000418371400002


DC Field: Value
dc.contributor.author: Ge, W
dc.contributor.author: Yu, Y
dc.date.accessioned: 2017-08-25T02:52:01Z
dc.date.available: 2017-08-25T02:52:01Z
dc.date.issued: 2017
dc.identifier.citation: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, 21-26 July 2017, p. 10-19
dc.identifier.uri: http://hdl.handle.net/10722/243235
dc.description.abstract: Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.
dc.language: eng
dc.publisher: IEEE Computer Society. The proceedings are located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147
dc.relation.ispartof: IEEE Conference on Computer Vision and Pattern Recognition
dc.title: Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning
dc.type: Conference_Paper
dc.identifier.email: Yu, Y: yzyu@cs.hku.hk
dc.identifier.authority: Yu, Y=rp01415
dc.description.nature: link_to_OA_fulltext
dc.identifier.doi: 10.1109/CVPR.2017.9
dc.identifier.scopus: eid_2-s2.0-85044453361
dc.identifier.hkuros: 273678
dc.identifier.spage: 10
dc.identifier.epage: 19
dc.identifier.isi: WOS:000418371400002
dc.publisher.place: United States
