File Download

There are no files associated with this item.


Conference Paper: Rethinking the pruning criteria for convolutional neural network

Title: Rethinking the pruning criteria for convolutional neural network
Authors: Huang, Z; Shao, W; Wang, X; Lin, L; Luo, P
Keywords: Deep learning
Issue Date: 2021
Publisher: Neural Information Processing Systems Foundation
Citation: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), Sydney, Australia, December 6-14, 2021
Abstract: Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), and various pruning criteria have been proposed to remove redundant filters. From our comprehensive experiments, we found two blind spots in pruning criteria: (1) Similarity: there are strong similarities among several primary pruning criteria that are widely cited and compared; according to these criteria, the ranks of the filters' Importance Scores are almost identical, resulting in similar pruned structures. (2) Applicability: the filters' Importance Scores measured by some pruning criteria are too close together to distinguish network redundancy well. In this paper, we analyze these blind spots across different types of pruning criteria, under both layer-wise and global pruning. We also break some stereotypes, for example by showing that the results of ℓ1 and ℓ2 pruning are not always similar. These analyses are based on empirical experiments and on our assumption (the Convolutional Weight Distribution Assumption) that the well-trained convolutional filters in each layer approximately follow a Gaussian-like distribution. This assumption has been verified through systematic and extensive statistical tests.
Persistent Identifier: http://hdl.handle.net/10722/315618
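
The ℓ1/ℓ2 criteria and the rank-similarity finding in the abstract are easy to make concrete. Below is a minimal sketch, not the authors' released code: it scores the filters of one convolutional layer with the ℓ1 and ℓ2 norm criteria, compares the two rankings with a Spearman correlation (the "Similarity" blind spot), and runs a simple normality check in the spirit of the Convolutional Weight Distribution Assumption. The layer weights are simulated with a Gaussian initialization so the snippet is self-contained; PyTorch and SciPy are assumed to be available.

```python
# Hedged sketch of norm-based channel-pruning criteria; not the paper's code.
import torch
from scipy.stats import spearmanr, kstest

def filter_importance(conv_weight: torch.Tensor, p: int) -> torch.Tensor:
    """Score each output filter of a conv layer by its l_p norm.

    conv_weight has shape (out_channels, in_channels, kH, kW);
    the result holds one Importance Score per output channel.
    """
    return conv_weight.flatten(start_dim=1).norm(p=p, dim=1)

# Hypothetical layer: 64 filters of shape 32x3x3. A real check would load a
# trained checkpoint; Gaussian noise stands in for "well-trained" weights here.
w = torch.randn(64, 32, 3, 3)

s1 = filter_importance(w, p=1)  # l1 criterion
s2 = filter_importance(w, p=2)  # l2 criterion

# Near-identical rankings (rho close to 1) mean the two criteria would keep
# and prune almost the same filters -- the "Similarity" blind spot.
rho, _ = spearmanr(s1.numpy(), s2.numpy())
print(f"Spearman rank correlation (l1 vs l2): {rho:.3f}")

# Crude version of the distribution check: standardize the layer's weights
# and test them against N(0, 1). (Trivially passes for randn-simulated data.)
z = ((w - w.mean()) / w.std()).flatten().numpy()
stat, pval = kstest(z, "norm")
print(f"KS test vs N(0,1): statistic={stat:.3f}, p={pval:.3f}")
```

On actual trained CNNs the paper reports that several widely used criteria yield almost the same rank order, so a high correlation on real checkpoints is the expected, and problematic, outcome.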

DC Field | Value | Language
dc.contributor.author | Huang, Z | -
dc.contributor.author | Shao, W | -
dc.contributor.author | Wang, X | -
dc.contributor.author | Lin, L | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2022-08-19T09:01:15Z | -
dc.date.available | 2022-08-19T09:01:15Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), Sydney, Australia, December 6-14, 2021 | -
dc.identifier.uri | http://hdl.handle.net/10722/315618 | -
dc.description.abstract | Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), and various pruning criteria have been proposed to remove redundant filters. From our comprehensive experiments, we found two blind spots in pruning criteria: (1) Similarity: there are strong similarities among several primary pruning criteria that are widely cited and compared; according to these criteria, the ranks of the filters' Importance Scores are almost identical, resulting in similar pruned structures. (2) Applicability: the filters' Importance Scores measured by some pruning criteria are too close together to distinguish network redundancy well. In this paper, we analyze these blind spots across different types of pruning criteria, under both layer-wise and global pruning. We also break some stereotypes, for example by showing that the results of ℓ1 and ℓ2 pruning are not always similar. These analyses are based on empirical experiments and on our assumption (the Convolutional Weight Distribution Assumption) that the well-trained convolutional filters in each layer approximately follow a Gaussian-like distribution. This assumption has been verified through systematic and extensive statistical tests. | -
dc.language | eng | -
dc.publisher | Neural Information Processing Systems Foundation. | -
dc.relation.ispartof | Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) | -
dc.subject | Deep learning | -
dc.title | Rethinking the pruning criteria for convolutional neural network | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.hkuros | 335594 | -
dc.publisher.place | United States | -
