Conference Paper: Implicit Regularization Paths of Weighted Neural Representations

Title: Implicit Regularization Paths of Weighted Neural Representations
Authors: Du, Jin Hong; Patil, Pratik
Issue Date: 2024
Citation: Advances in Neural Information Processing Systems, 2024, v. 37
Abstract: We study the implicit regularization effects induced by (observation) weighting of pretrained features. For weight and feature matrices of bounded operator norms that are infinitesimally free with respect to (normalized) trace functionals, we derive equivalence paths connecting different weighting matrices and ridge regularization levels. Specifically, we show that ridge estimators trained on weighted features along the same path are asymptotically equivalent when evaluated against test vectors of bounded norms. These paths can be interpreted as matching the effective degrees of freedom of ridge estimators fitted with weighted features. For the special case of subsampling without replacement, our results apply to independently sampled random features and kernel features and confirm recent conjectures (Conjectures 7 and 8) of the authors on the existence of such paths in [50]. We also present an additive risk decomposition for ensembles of weighted estimators and show that the risks are equivalent along the paths when the ensemble size goes to infinity. As a practical consequence of the path equivalences, we develop an efficient cross-validation method for tuning and apply it to subsampled pretrained representations across several models (e.g., ResNet-50) and datasets (e.g., CIFAR-100).
Persistent Identifier: http://hdl.handle.net/10722/365429
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
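The degrees-of-freedom matching described in the abstract can be illustrated numerically. Below is a minimal sketch (not the paper's code): it uses subsampling without replacement as the weighting, a hypothetical `ridge_df` helper for the effective degrees of freedom tr[X (XᵀX + nλI)⁻¹ Xᵀ] of a ridge estimator, and made-up dimensions and regularization levels. The path equivalence says that pairs (subsample size, λ) with matched degrees of freedom yield asymptotically equivalent ridge estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 100
X = rng.standard_normal((n, p))


def ridge_df(X, lam):
    """Effective degrees of freedom of ridge at level lam:
    tr[X (X^T X + n*lam*I)^{-1} X^T] = tr[G (G + lam*I)^{-1}] with G = X^T X / n."""
    G = X.T @ X / X.shape[0]
    return np.trace(G @ np.linalg.inv(G + lam * np.eye(G.shape[0])))


# Subsample half the observations without replacement (a 0/1 weighting).
idx = rng.choice(n, size=n // 2, replace=False)
Xs = X[idx]

# Degrees of freedom of the subsampled ridge estimator at a chosen level lam0.
lam0 = 0.5  # hypothetical value for illustration
target = ridge_df(Xs, lam0)

# Geometric bisection: find the full-data ridge level with the same df.
# df is strictly decreasing in lam, so the bracket below is valid here.
lo, hi = 1e-6, 1e3
for _ in range(80):
    mid = (lo * hi) ** 0.5
    if ridge_df(X, mid) > target:
        lo = mid  # df too large -> need more regularization
    else:
        hi = mid
lam_matched = (lo * hi) ** 0.5

print(f"df(subsample, {lam0}) = {target:.3f}")
print(f"matched full-data lam = {lam_matched:.4f}, df = {ridge_df(X, lam_matched):.3f}")
```

The pair (half-subsampling, `lam0`) and (no subsampling, `lam_matched`) then lie on the same regularization path in the sense of the paper, so their ridge estimators should agree asymptotically.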

 

DC Field: Value
dc.contributor.author: Du, Jin Hong
dc.contributor.author: Patil, Pratik
dc.date.accessioned: 2025-11-05T09:40:24Z
dc.date.available: 2025-11-05T09:40:24Z
dc.date.issued: 2024
dc.identifier.citation: Advances in Neural Information Processing Systems, 2024, v. 37
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/365429
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Implicit Regularization Paths of Weighted Neural Representations
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-105000502753
dc.identifier.volume: 37
