
Article: PAC–Bayes Guarantees for Data-Adaptive Pairwise Learning

Title: PAC–Bayes Guarantees for Data-Adaptive Pairwise Learning
Authors: Zhou, Sijia; Lei, Yunwen; Kabán, Ata
Keywords: algorithmic stability; PAC–Bayes; pairwise learning; randomized algorithms
Issue Date: 8-Aug-2025
Publisher: MDPI
Citation: Entropy, 2025, v. 27, n. 8
Abstract

We study the generalization properties of stochastic optimization methods under adaptive data sampling schemes, focusing on the setting of pairwise learning, which is central to tasks like ranking, metric learning, and AUC maximization. Unlike pointwise learning, pairwise methods must address statistical dependencies between input pairs—a challenge that existing analyses do not adequately handle when sampling is adaptive. In this work, we extend a general framework that integrates two algorithm-dependent approaches, algorithmic stability and PAC–Bayes analysis, for this purpose. Specifically, we examine (1) Pairwise Stochastic Gradient Descent (Pairwise SGD), widely used across machine learning applications, and (2) Pairwise Stochastic Gradient Descent Ascent (Pairwise SGDA), common in adversarial training. Our analysis avoids artificial randomization and leverages the inherent stochasticity of gradient updates instead. Our results yield generalization guarantees of order (Formula presented.) under non-uniform adaptive sampling strategies, covering both smooth and non-smooth convex settings. We believe these findings address a significant gap in the theory of pairwise learning with adaptive sampling.
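For readers unfamiliar with the setup, the sketch below illustrates the kind of procedure the abstract refers to: a Pairwise SGD update on a convex pairwise surrogate loss, where pairs are drawn non-uniformly according to data-adaptive sampling weights. This is an illustrative example only, not the authors' algorithm or analysis; the hinge-style AUC surrogate, the exponential weighting rule, and all names (e.g. pairwise_loss_grad, pairwise_sgd) are assumptions made for this sketch.

import numpy as np

def pairwise_loss_grad(w, x_i, x_j, y_i, y_j):
    # (Sub)gradient of a hinge-style pairwise (AUC-type) surrogate:
    # loss = max(0, 1 - (y_i - y_j) * <w, x_i - x_j>).
    diff = x_i - x_j
    margin = (y_i - y_j) * np.dot(w, diff)
    if margin >= 1.0:
        return np.zeros_like(w)
    return -(y_i - y_j) * diff

def pairwise_sgd(X, y, steps=1000, eta=0.1, temperature=1.0, seed=0):
    # Pairwise SGD with non-uniform, data-adaptive pair sampling (illustrative).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    scores = np.ones(n)  # per-example weights that drive the adaptive sampler
    for t in range(steps):
        p = np.exp(temperature * scores)
        p /= p.sum()
        i, j = rng.choice(n, size=2, replace=False, p=p)  # non-uniform pair sampling
        w -= (eta / np.sqrt(t + 1)) * pairwise_loss_grad(w, X[i], X[j], y[i], y[j])
        # Adaptivity: examples with larger current pairwise loss get sampled more often.
        loss = max(0.0, 1.0 - (y[i] - y[j]) * np.dot(w, X[i] - X[j]))
        scores[i] = scores[j] = loss
    return w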


Persistent Identifier: http://hdl.handle.net/10722/360508

 

DC Field: Value
dc.contributor.author: Zhou, Sijia
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Kabán, Ata
dc.date.accessioned: 2025-09-11T00:30:51Z
dc.date.available: 2025-09-11T00:30:51Z
dc.date.issued: 2025-08-08
dc.identifier.citation: Entropy, 2025, v. 27, n. 8
dc.identifier.uri: http://hdl.handle.net/10722/360508
dc.description.abstract: We study the generalization properties of stochastic optimization methods under adaptive data sampling schemes, focusing on the setting of pairwise learning, which is central to tasks like ranking, metric learning, and AUC maximization. Unlike pointwise learning, pairwise methods must address statistical dependencies between input pairs—a challenge that existing analyses do not adequately handle when sampling is adaptive. In this work, we extend a general framework that integrates two algorithm-dependent approaches—algorithmic stability and PAC–Bayes analysis for this purpose. Specifically, we examine (1) Pairwise Stochastic Gradient Descent (Pairwise SGD), widely used across machine learning applications, and (2) Pairwise Stochastic Gradient Descent Ascent (Pairwise SGDA), common in adversarial training. Our analysis avoids artificial randomization and leverages the inherent stochasticity of gradient updates instead. Our results yield generalization guarantees of order (Formula presented.) under non-uniform adaptive sampling strategies, covering both smooth and non-smooth convex settings. We believe these findings address a significant gap in the theory of pairwise learning with adaptive sampling.
dc.language: eng
dc.publisher: MDPI
dc.relation.ispartof: Entropy
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: algorithmic stability
dc.subject: PAC–Bayes
dc.subject: pairwise learning
dc.subject: randomized algorithms
dc.title: PAC–Bayes Guarantees for Data-Adaptive Pairwise Learning
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.3390/e27080845
dc.identifier.scopus: eid_2-s2.0-105014278874
dc.identifier.volume: 27
dc.identifier.issue: 8
dc.identifier.eissn: 1099-4300
dc.identifier.issnl: 1099-4300
