Conference Paper: Learning diverse and discriminative representations via the principle of maximal coding rate reduction

Title: Learning diverse and discriminative representations via the principle of maximal coding rate reduction
Authors: Yu, Yaodong; Chan, Kwan Ho Ryan; You, Chong; Song, Chaobing; Ma, Yi
Issue Date: 2020
Citation: Advances in Neural Information Processing Systems, 2020, v. 2020-December
Abstract: To learn intrinsic low-dimensional structures from high-dimensional data that most discriminate between classes, we propose the principle of Maximal Coding Rate Reduction (MCR2), an information-theoretic measure that maximizes the coding rate difference between the whole dataset and the sum of each individual class. We clarify its relationships with most existing frameworks such as cross-entropy, information bottleneck, information gain, contractive and contrastive learning, and provide theoretical guarantees for learning diverse and discriminative features. The coding rate can be accurately computed from finite samples of degenerate subspace-like distributions and can learn intrinsic representations in supervised, self-supervised, and unsupervised settings in a unified manner. Empirically, the representations learned using this principle alone are significantly more robust to label corruptions in classification than those using cross-entropy, and can lead to state-of-the-art results in clustering mixed data from self-learned invariant features.
Persistent Identifier: http://hdl.handle.net/10722/327774
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
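For readers of the abstract above, a minimal PyTorch sketch of the rate-reduction quantity it describes: Delta R(Z) = R(Z, eps) - sum_j (n_j / n) R(Z_j, eps), the coding-rate difference between the whole dataset and its per-class parts. The function names, the eps precision parameter, and the n x d feature layout are illustrative assumptions, not the authors' released implementation.

import torch

def coding_rate(Z, eps=0.5):
    # R(Z, eps) = 1/2 * logdet(I + d/(n * eps^2) * Z^T Z):
    # the rate needed to encode the n x d features Z up to precision eps.
    # eps = 0.5 is an assumed value for illustration.
    n, d = Z.shape
    I = torch.eye(d, dtype=Z.dtype)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * Z.T @ Z)

def rate_reduction(Z, labels, eps=0.5):
    # Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): expand the coding rate
    # of the whole dataset while compressing the rate of each class.
    n = Z.shape[0]
    r_whole = coding_rate(Z, eps)
    r_classes = sum(
        (Z[labels == c].shape[0] / n) * coding_rate(Z[labels == c], eps)
        for c in labels.unique()
    )
    return r_whole - r_classes  # objective to maximize during training

A training loop would maximize rate_reduction over the features emitted by the network (gradient ascent on Delta R), with the features typically normalized so the coding rates remain comparable across batches.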

 

DC Field | Value | Language
dc.contributor.author | Yu, Yaodong | -
dc.contributor.author | Chan, Kwan Ho Ryan | -
dc.contributor.author | You, Chong | -
dc.contributor.author | Song, Chaobing | -
dc.contributor.author | Ma, Yi | -
dc.date.accessioned | 2023-05-08T02:26:43Z | -
dc.date.available | 2023-05-08T02:26:43Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Advances in Neural Information Processing Systems, 2020, v. 2020-December | -
dc.identifier.issn | 1049-5258 | -
dc.identifier.uri | http://hdl.handle.net/10722/327774 | -
dc.description.abstract | To learn intrinsic low-dimensional structures from high-dimensional data that most discriminate between classes, we propose the principle of Maximal Coding Rate Reduction (MCR2), an information-theoretic measure that maximizes the coding rate difference between the whole dataset and the sum of each individual class. We clarify its relationships with most existing frameworks such as cross-entropy, information bottleneck, information gain, contractive and contrastive learning, and provide theoretical guarantees for learning diverse and discriminative features. The coding rate can be accurately computed from finite samples of degenerate subspace-like distributions and can learn intrinsic representations in supervised, self-supervised, and unsupervised settings in a unified manner. Empirically, the representations learned using this principle alone are significantly more robust to label corruptions in classification than those using cross-entropy, and can lead to state-of-the-art results in clustering mixed data from self-learned invariant features. | -
dc.language | eng | -
dc.relation.ispartof | Advances in Neural Information Processing Systems | -
dc.title | Learning diverse and discriminative representations via the principle of maximal coding rate reduction | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85108420475 | -
dc.identifier.volume | 2020-December | -
