Conference Paper: Stability and Differential Privacy of Stochastic Gradient Descent for Pairwise Learning with Non-Smooth Loss

Title: Stability and Differential Privacy of Stochastic Gradient Descent for Pairwise Learning with Non-Smooth Loss
Authors: Yang, Zhenhuan; Lei, Yunwen; Lyu, Siwei; Ying, Yiming
Issue Date: 2021
Citation: Proceedings of Machine Learning Research, 2021, v. 130, p. 2026-2034
Abstract: Pairwise learning has recently received increasing attention since it subsumes many important machine learning tasks (e.g., AUC maximization and metric learning) into a unifying framework. In this paper, we give the first known stability and generalization analysis of stochastic gradient descent (SGD) for pairwise learning with non-smooth loss functions, which are widely used (e.g., Ranking SVM with the hinge loss). We introduce a novel decomposition in the stability analysis to decouple the pairwise-dependent random variables, and derive generalization bounds that are consistent with the setting of pointwise learning. Furthermore, we apply our stability analysis to develop differentially private SGD for pairwise learning, for which our utility bounds match the state-of-the-art output perturbation method (Huai et al., 2020) with smooth losses. Finally, we illustrate the results using specific examples of AUC maximization and similarity metric learning. As a byproduct, we provide an affirmative solution to an open question on the advantage of the nuclear-norm constraint over the Frobenius-norm constraint in similarity metric learning.
Persistent Identifier: http://hdl.handle.net/10722/329721
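The abstract's central objects, SGD on a non-smooth pairwise loss followed by output perturbation for differential privacy, can be illustrated with a small sketch. The following is a minimal, hypothetical Python example of projected SGD for AUC maximization with the pairwise hinge loss and optional Gaussian output perturbation; the pair-sampling scheme, step size, projection radius, and noise scale are illustrative placeholders, not the specific algorithm or privacy calibration analyzed in the paper.

    import numpy as np

    def pairwise_hinge_subgrad(w, x_pos, x_neg):
        """Subgradient of the non-smooth pairwise hinge loss max(0, 1 - w.(x_pos - x_neg))."""
        diff = x_pos - x_neg
        return -diff if 1.0 - np.dot(w, diff) > 0.0 else np.zeros_like(w)

    def pairwise_sgd_auc(X, y, n_steps=1000, eta=0.01, radius=1.0, sigma=0.0, seed=0):
        """Projected SGD over random positive/negative pairs (AUC maximization),
        with optional Gaussian output perturbation controlled by sigma."""
        rng = np.random.default_rng(seed)
        pos, neg = np.where(y == 1)[0], np.where(y == -1)[0]
        w = np.zeros(X.shape[1])
        for _ in range(n_steps):
            i, j = rng.choice(pos), rng.choice(neg)        # sample one positive/negative pair per step
            w = w - eta * pairwise_hinge_subgrad(w, X[i], X[j])
            norm = np.linalg.norm(w)                       # project onto an L2 ball of the given radius
            if norm > radius:
                w *= radius / norm
        # Output perturbation: Gaussian noise added once to the final iterate; sigma would
        # have to be calibrated to the desired (epsilon, delta) level, which is not done here.
        return w + sigma * rng.standard_normal(w.shape)

For instance, calling pairwise_sgd_auc(X, y, sigma=0.1) on labels in {-1, +1} returns a (noisy) linear scorer trained to rank positive examples above negative ones.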

 

DC Field / Value
dc.contributor.author: Yang, Zhenhuan
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Lyu, Siwei
dc.contributor.author: Ying, Yiming
dc.date.accessioned: 2023-08-09T03:34:51Z
dc.date.available: 2023-08-09T03:34:51Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings of Machine Learning Research, 2021, v. 130, p. 2026-2034
dc.identifier.uri: http://hdl.handle.net/10722/329721
dc.language: eng
dc.relation.ispartof: Proceedings of Machine Learning Research
dc.title: Stability and Differential Privacy of Stochastic Gradient Descent for Pairwise Learning with Non-Smooth Loss
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85108534530
dc.identifier.volume: 130
dc.identifier.spage: 2026
dc.identifier.epage: 2034
dc.identifier.eissn: 2640-3498
