Conference Paper: Functional regularization for representation learning: A unified theoretical perspective

Title: Functional regularization for representation learning: A unified theoretical perspective
Authors: Garg, Siddhant; Liang, Yingyu
Issue Date: 2020
Citation: Advances in Neural Information Processing Systems, 2020, v. 2020-December
Abstract: Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks. While these approaches are widely used in practice and achieve impressive empirical gains, their theoretical understanding largely lags behind. Towards bridging this gap, we present a unifying perspective where several such approaches can be viewed as imposing a regularization on the representation via a learnable function using unlabeled data. We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches, which generalizes the framework of [3] to allow learnable regularization functions. Our sample complexity bounds show that, with carefully chosen hypothesis classes to exploit the structure in the data, these learnable regularization functions can prune the hypothesis space, and help reduce the amount of labeled data needed. We then provide two concrete examples of functional regularization, one using auto-encoders and the other using masked self-supervision, and apply our framework to quantify the reduction in the sample complexity bound of labeled data. We also provide complementary empirical results to support our analysis.
Persistent Identifier: http://hdl.handle.net/10722/341316
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
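
The abstract's auto-encoder example of functional regularization can be made concrete with a small sketch. The following is an illustrative toy implementation only, assuming PyTorch; the module names (encoder, decoder, head), the dimensions, and the weight lam are hypothetical choices and are not taken from the paper. It jointly minimizes a supervised loss on a small labeled set and a reconstruction loss on a larger unlabeled set, with the learnable decoder playing the role of the regularization function on the representation.

import torch
import torch.nn as nn

d, k = 20, 5                      # input dim, representation dim (hypothetical)
encoder = nn.Linear(d, k)         # representation map phi(x)
decoder = nn.Linear(k, d)         # learnable regularization function (auto-encoder decoder)
head = nn.Linear(k, 1)            # downstream predictor on top of the representation

x_lab = torch.randn(32, d)        # small labeled sample (synthetic placeholder data)
y_lab = torch.randn(32, 1)
x_unlab = torch.randn(512, d)     # larger unlabeled sample

params = list(encoder.parameters()) + list(decoder.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-2)
lam = 1.0                         # weight on the functional regularizer (illustrative)

for step in range(200):
    opt.zero_grad()
    # Supervised loss on the few labeled examples.
    pred_loss = nn.functional.mse_loss(head(encoder(x_lab)), y_lab)
    # Reconstruction error of the learnable decoder regularizes the representation
    # using unlabeled data.
    reg_loss = nn.functional.mse_loss(decoder(encoder(x_unlab)), x_unlab)
    (pred_loss + lam * reg_loss).backward()
    opt.step()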

 

DC Field: Value
dc.contributor.author: Garg, Siddhant
dc.contributor.author: Liang, Yingyu
dc.date.accessioned: 2024-03-13T08:41:51Z
dc.date.available: 2024-03-13T08:41:51Z
dc.date.issued: 2020
dc.identifier.citation: Advances in Neural Information Processing Systems, 2020, v. 2020-December
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/341316
dc.description.abstract: Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks. While these approaches are widely used in practice and achieve impressive empirical gains, their theoretical understanding largely lags behind. Towards bridging this gap, we present a unifying perspective where several such approaches can be viewed as imposing a regularization on the representation via a learnable function using unlabeled data. We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches, which generalizes the framework of [3] to allow learnable regularization functions. Our sample complexity bounds show that, with carefully chosen hypothesis classes to exploit the structure in the data, these learnable regularization functions can prune the hypothesis space, and help reduce the amount of labeled data needed. We then provide two concrete examples of functional regularization, one using auto-encoders and the other using masked self-supervision, and apply our framework to quantify the reduction in the sample complexity bound of labeled data. We also provide complementary empirical results to support our analysis.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Functional regularization for representation learning: A unified theoretical perspective
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85108401917
dc.identifier.volume: 2020-December
