Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/CVPR.2017.425
- Scopus: eid_2-s2.0-85044258638
Citations:
- Scopus: 0

Appears in Collections:
Conference Paper: Improving training of deep neural networks via Singular Value Bounding
Field | Value
---|---
Title | Improving training of deep neural networks via Singular Value Bounding
Authors | Jia, Kui; Tao, Dacheng; Gao, Shenghua; Xu, Xiangmin
Issue Date | 2017
Citation | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 3994-4002
Abstract | Deep learning methods achieve great success recently on many computer vision problems. In spite of these practical successes, optimization of deep networks remains an active topic in deep learning research. In this work, we focus on investigation of the network solution properties that can potentially lead to good performance. Our research is inspired by theoretical and empirical results that use orthogonal matrices to initialize networks, but we are interested in investigating how orthogonal weight matrices perform when network training converges. To this end, we propose to constrain the solutions of weight matrices in the orthogonal feasible set during the whole process of network training, and achieve this by a simple yet effective method called Singular Value Bounding (SVB). In SVB, all singular values of each weight matrix are simply bounded in a narrow band around the value of 1. Based on the same motivation, we also propose Bounded Batch Normalization (BBN), which improves Batch Normalization by removing its potential risk of ill-conditioned layer transform. We present both theoretical and empirical results to justify our proposed methods. Experiments on benchmark image classification datasets show the efficacy of our proposed SVB and BBN. In particular, we achieve the state-of-the-art results of 3.06% error rate on CIFAR10 and 16.90% on CIFAR100, using off-the-shelf network architectures (Wide ResNets). Our preliminary results on ImageNet also show the promise in large-scale learning. We release the implementation code of our methods at www.aperture-lab.net/research/svb.
Persistent Identifier | http://hdl.handle.net/10722/345100
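The SVB step described in the abstract (bounding every singular value of each weight matrix in a narrow band around 1) can be sketched as a projection applied periodically during training. This is an illustrative reconstruction from the abstract only; the function name, the band width `eps`, and the application schedule are assumptions, not values taken from the paper.

```python
import numpy as np

def singular_value_bounding(W, eps=0.05):
    """Project a weight matrix so all singular values lie in a
    narrow band around 1, namely [1/(1+eps), 1+eps].

    Sketch of the SVB idea from the abstract; eps is an assumed,
    illustrative band width.
    """
    # Decompose W = U * diag(s) * Vt.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Clip singular values into the band around 1.
    s = np.clip(s, 1.0 / (1.0 + eps), 1.0 + eps)
    # Reassemble the bounded weight matrix.
    return U @ np.diag(s) @ Vt
```

In the spirit of the abstract, such a projection would be interleaved with ordinary SGD updates so the weights stay near the orthogonal feasible set throughout training; how often to apply it is a hyperparameter not specified here.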
DC Field | Value | Language |
---|---|---
dc.contributor.author | Jia, Kui | - |
dc.contributor.author | Tao, Dacheng | - |
dc.contributor.author | Gao, Shenghua | - |
dc.contributor.author | Xu, Xiangmin | - |
dc.date.accessioned | 2024-08-15T09:25:14Z | - |
dc.date.available | 2024-08-15T09:25:14Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 3994-4002 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345100 | - |
dc.description.abstract | Deep learning methods achieve great success recently on many computer vision problems. In spite of these practical successes, optimization of deep networks remains an active topic in deep learning research. In this work, we focus on investigation of the network solution properties that can potentially lead to good performance. Our research is inspired by theoretical and empirical results that use orthogonal matrices to initialize networks, but we are interested in investigating how orthogonal weight matrices perform when network training converges. To this end, we propose to constrain the solutions of weight matrices in the orthogonal feasible set during the whole process of network training, and achieve this by a simple yet effective method called Singular Value Bounding (SVB). In SVB, all singular values of each weight matrix are simply bounded in a narrow band around the value of 1. Based on the same motivation, we also propose Bounded Batch Normalization (BBN), which improves Batch Normalization by removing its potential risk of ill-conditioned layer transform. We present both theoretical and empirical results to justify our proposed methods. Experiments on benchmark image classification datasets show the efficacy of our proposed SVB and BBN. In particular, we achieve the state-of-the-art results of 3.06% error rate on CIFAR10 and 16.90% on CIFAR100, using off-the-shelf network architectures (Wide ResNets). Our preliminary results on ImageNet also show the promise in large-scale learning. We release the implementation code of our methods at www.aperture-lab.net/research/svb. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 | - |
dc.title | Improving training of deep neural networks via Singular Value Bounding | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR.2017.425 | - |
dc.identifier.scopus | eid_2-s2.0-85044258638 | - |
dc.identifier.volume | 2017-January | - |
dc.identifier.spage | 3994 | - |
dc.identifier.epage | 4002 | - |