Article: ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction

Title: ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction
Authors: Chan, Kwan Ho Ryan; Yu, Yaodong; You, Chong; Qi, Haozhi; Wright, John; Ma, Yi
Keywords: linear discriminative representation; multi-channel convolution; rate reduction; sparsity and invariance trade-off; white-box deep network
Issue Date: 2022
Citation: Journal of Machine Learning Research, 2022, v. 23
Abstract: This work attempts to provide a plausible theoretical framework for interpreting modern deep (convolutional) networks from the principles of data compression and discriminative representation. We argue that for high-dimensional multi-class data, the optimal linear discriminative representation maximizes the coding rate difference between the whole dataset and the average of all the subsets. We show that the basic iterative gradient ascent scheme for optimizing the rate reduction objective naturally leads to a multi-layer deep network, named ReduNet, which shares common characteristics of modern deep networks. The deep layered architectures, linear and nonlinear operators, and even the parameters of the network are all explicitly constructed layer by layer via forward propagation, although they remain amenable to fine-tuning via back propagation. All components of the so-obtained "white-box" network have precise optimization, statistical, and geometric interpretations. Moreover, all linear operators of the so-derived network naturally become multi-channel convolutions when classification is required to be rigorously shift-invariant. The derivation in the invariant setting suggests a trade-off between sparsity and invariance, and also indicates that such a deep convolutional network is significantly more efficient to construct and learn in the spectral domain. Our preliminary simulations and experiments clearly verify the effectiveness of both the rate reduction objective and the associated ReduNet.
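
The rate reduction objective named in the abstract can be stated explicitly. The following is one standard way to write it; the notation below is our assumption for illustration and is not part of this record: Z ∈ R^{d×n} stacks the n learned feature vectors as columns, Π = {Π_j} are the diagonal class-membership matrices, and ε is the prescribed coding precision.

    \Delta R(Z, \Pi, \epsilon)
      = \frac{1}{2}\log\det\Big(I + \frac{d}{n\epsilon^2}\, Z Z^\top\Big)
      - \sum_{j=1}^{k} \frac{\mathrm{tr}(\Pi_j)}{2n}\,
        \log\det\Big(I + \frac{d}{\mathrm{tr}(\Pi_j)\,\epsilon^2}\, Z \Pi_j Z^\top\Big)

The first term is the coding rate of the whole dataset; the sum is the average coding rate of the class subsets, so maximizing ΔR expands the ensemble of features while compressing each class.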
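The abstract's claim that iterative gradient ascent on this objective "naturally leads to a multi-layer deep network" can be made concrete with a short sketch. The NumPy code below is a minimal illustration under the notation above, not the authors' released implementation; the function names and the defaults eps=0.5 and eta=0.5 are our assumptions. Each constructed "layer" applies an expansion operator E and class-wise compression operators C_j, both computed in closed form from the current features, then renormalizes each feature onto the unit sphere.

import numpy as np

def coding_rate(Z, eps):
    # R(Z, eps) = 1/2 * logdet(I + d/(n*eps^2) * Z Z^T); features are the columns of Z.
    d, n = Z.shape
    alpha = d / (n * eps ** 2)
    return 0.5 * np.linalg.slogdet(np.eye(d) + alpha * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    # Delta R: rate of the whole dataset minus the average rate of the class subsets.
    d, n = Z.shape
    avg = sum(((labels == j).sum() / n) * coding_rate(Z[:, labels == j], eps)
              for j in np.unique(labels))
    return coding_rate(Z, eps) - avg

def redunet_layer(Z, labels, eps=0.5, eta=0.5):
    # One forward-constructed layer: a gradient-ascent step on Delta R,
    # followed by projecting every feature back onto the unit sphere.
    d, n = Z.shape
    alpha = d / (n * eps ** 2)
    E = alpha * np.linalg.inv(np.eye(d) + alpha * Z @ Z.T)  # expansion operator
    grad = E @ Z
    for j in np.unique(labels):
        mask = labels == j
        nj = int(mask.sum())
        alpha_j = d / (nj * eps ** 2)
        # compression operator for class j
        Cj = alpha_j * np.linalg.inv(np.eye(d) + alpha_j * Z[:, mask] @ Z[:, mask].T)
        grad[:, mask] -= (nj / n) * (Cj @ Z[:, mask])
    Z_next = Z + eta * grad
    return Z_next / np.linalg.norm(Z_next, axis=0, keepdims=True)

Stacking this map for a fixed number of steps gives the forward-constructed network; monitoring rate_reduction(Z, labels) across layers shows the objective being pushed up without any back propagation, which is the sense in which the architecture and parameters are "explicitly constructed layer by layer."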
Persistent Identifier: http://hdl.handle.net/10722/327784
ISSN: 1532-4435
2021 Impact Factor: 5.177
2020 SCImago Journal Rankings: 1.240


DC Field / Value
dc.contributor.author: Chan, Kwan Ho Ryan
dc.contributor.author: Yu, Yaodong
dc.contributor.author: You, Chong
dc.contributor.author: Qi, Haozhi
dc.contributor.author: Wright, John
dc.contributor.author: Ma, Yi
dc.date.accessioned: 2023-05-08T02:26:47Z
dc.date.available: 2023-05-08T02:26:47Z
dc.date.issued: 2022
dc.identifier.citation: Journal of Machine Learning Research, 2022, v. 23
dc.identifier.issn: 1532-4435
dc.identifier.uri: http://hdl.handle.net/10722/327784
dc.description.abstract: This work attempts to provide a plausible theoretical framework for interpreting modern deep (convolutional) networks from the principles of data compression and discriminative representation. We argue that for high-dimensional multi-class data, the optimal linear discriminative representation maximizes the coding rate difference between the whole dataset and the average of all the subsets. We show that the basic iterative gradient ascent scheme for optimizing the rate reduction objective naturally leads to a multi-layer deep network, named ReduNet, which shares common characteristics of modern deep networks. The deep layered architectures, linear and nonlinear operators, and even the parameters of the network are all explicitly constructed layer by layer via forward propagation, although they remain amenable to fine-tuning via back propagation. All components of the so-obtained "white-box" network have precise optimization, statistical, and geometric interpretations. Moreover, all linear operators of the so-derived network naturally become multi-channel convolutions when classification is required to be rigorously shift-invariant. The derivation in the invariant setting suggests a trade-off between sparsity and invariance, and also indicates that such a deep convolutional network is significantly more efficient to construct and learn in the spectral domain. Our preliminary simulations and experiments clearly verify the effectiveness of both the rate reduction objective and the associated ReduNet.
dc.language: eng
dc.relation.ispartof: Journal of Machine Learning Research
dc.subject: linear discriminative representation
dc.subject: multi-channel convolution
dc.subject: rate reduction
dc.subject: sparsity and invariance trade-off
dc.subject: white-box deep network
dc.title: ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85130327907
dc.identifier.volume: 23
dc.identifier.eissn: 1533-7928
