Conference Paper: Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers

Title: Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Authors: Liu, J; Xu, Z; Shi, R; Cheung, RCC; So, HKH
Keywords: neural network pruning
sparse learning
network compression
architecture search
Issue Date: 2020
Citation: 8th International Conference on Learning Representations (ICLR) 2020, Virtual Conference, Addis Ababa, Ethiopia, 27-30 April 2020, p. 1-14
Abstract: We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds can be adjusted dynamically in a fine-grained, layer-wise manner via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same number of training epochs as dense models. Dynamic Sparse Training achieves state-of-the-art performance compared with other sparse training algorithms on various network architectures. Additionally, we make several surprising observations that provide strong evidence for the effectiveness and efficiency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and suggest how our algorithm can guide the design of more compact network architectures.
Description: Poster Presentation
Persistent Identifier: http://hdl.handle.net/10722/288231
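
For readers of this record, the sketch below illustrates the kind of trainable masked layer the abstract describes: each layer carries a trainable pruning threshold, a binary mask keeps only the weights whose magnitude exceeds that threshold, and a differentiable surrogate lets backpropagation adjust the weights and the thresholds jointly. This is a minimal PyTorch illustration, not the authors' released implementation: the class name `MaskedLinear`, the per-row threshold granularity, the sigmoid straight-through surrogate, and the `exp(-threshold)` sparsity penalty in the demo are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Fully connected layer with trainable, fine-grained pruning thresholds.

    Weights whose magnitude falls below the layer's threshold are masked out
    in the forward pass; a straight-through surrogate routes gradients to both
    the weights and the thresholds, so the sparse structure is learned jointly
    with the parameters.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # One trainable threshold per output row (granularity assumed here).
        self.threshold = nn.Parameter(torch.zeros(out_features, 1))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Exact binary mask: keep a weight if its magnitude exceeds the threshold.
        hard = (self.weight.abs() > self.threshold).float()
        # Differentiable surrogate of the step function (straight-through trick):
        # the forward value equals `hard`, while gradients flow through `soft`
        # to both the weights and the thresholds.
        soft = torch.sigmoid(self.weight.abs() - self.threshold)
        mask = hard + soft - soft.detach()
        return F.linear(x, self.weight * mask, self.bias)

    @torch.no_grad()
    def sparsity(self) -> float:
        """Fraction of weights currently masked to zero."""
        return float((self.weight.abs() <= self.threshold).float().mean())


if __name__ == "__main__":
    layer = MaskedLinear(784, 256)
    out = layer(torch.randn(32, 784))
    # Hypothetical sparsity regularizer: exp(-threshold) shrinks as thresholds
    # grow, so minimizing it pushes the layer toward higher sparsity.
    loss = out.pow(2).mean() + 1e-3 * torch.exp(-layer.threshold).sum()
    loss.backward()
    print(f"initial sparsity: {layer.sparsity():.3f}")
```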

 

DC Field | Value | Language
dc.contributor.author | Liu, J | -
dc.contributor.author | Xu, Z | -
dc.contributor.author | Shi, R | -
dc.contributor.author | Cheung, RCC | -
dc.contributor.author | So, HKH | -
dc.date.accessioned | 2020-10-05T12:09:49Z | -
dc.date.available | 2020-10-05T12:09:49Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | 8th International Conference on Learning Representations (ICLR) 2020, Virtual Conference, Addis Ababa, Ethiopia, 27-30 April 2020, p. 1-14 | -
dc.identifier.uri | http://hdl.handle.net/10722/288231 | -
dc.description | Poster Presentation | -
dc.description.abstract | We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds can be adjusted dynamically in a fine-grained, layer-wise manner via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same number of training epochs as dense models. Dynamic Sparse Training achieves state-of-the-art performance compared with other sparse training algorithms on various network architectures. Additionally, we make several surprising observations that provide strong evidence for the effectiveness and efficiency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and suggest how our algorithm can guide the design of more compact network architectures. | -
dc.language | eng | -
dc.relation.ispartof | International Conference on Learning Representations (ICLR) | -
dc.subject | neural network pruning | -
dc.subject | sparse learning | -
dc.subject | network compression | -
dc.subject | architecture search | -
dc.title | Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers | -
dc.type | Conference_Paper | -
dc.identifier.email | So, HKH: hso@eee.hku.hk | -
dc.identifier.authority | So, HKH=rp00169 | -
dc.identifier.hkuros | 315349 | -
dc.identifier.spage | 1 | -
dc.identifier.epage | 14 | -
