
Conference Paper: Agnostic learning of halfspaces with gradient descent via soft margins

Title: Agnostic learning of halfspaces with gradient descent via soft margins
Authors: Frei, S; Cao, Y; Gu, Q
Issue Date: 2021
Publisher: ML Research Press
Citation: 38th International Conference on Machine Learning (ICML), 18-24 July 2021, Virtual Event. In Proceedings of the 38th International Conference on Machine Learning (ICML) 2021, v. 139, p. 3417-3426
Abstract: We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of halfspaces. We show that when a quantity we refer to as the soft margin is well-behaved (a condition satisfied by log-concave isotropic distributions, among others), minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. As standard convex optimization arguments lead to efficient guarantees for minimizing convex surrogates of the zero-one loss, our methods allow for the first positive guarantees for the classification error of halfspaces learned by gradient descent using the binary cross-entropy or hinge loss in the presence of agnostic label noise.
Persistent Identifier: http://hdl.handle.net/10722/314620
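
The abstract names the soft margin but the record does not define it. As a hedged sketch (the paper's exact definition, normalization, and constants may differ), one natural formalization for the optimal unit-norm halfspace \bar{v} is:

% Illustrative soft-margin function (an assumption here; see the paper for
% the authoritative definition). It measures the probability mass in a thin
% slab around the decision boundary of the optimal halfspace \bar{v}.
\[
  \phi_{\bar v}(\gamma) := \mathbb{P}_{x \sim \mathcal{D}_x}\bigl( |\langle \bar v, x \rangle| \le \gamma \bigr),
  \qquad \gamma > 0.
\]
% "Well-behaved" would then mean \phi_{\bar v}(\gamma) \le C \gamma for small
% \gamma: e.g., if x is isotropic log-concave, the marginal \langle \bar v, x \rangle
% has a bounded density, giving \phi_{\bar v}(\gamma) = O(\gamma).

Intuitively, a small phi means little probability mass lies close to the optimal decision boundary, which is what allows guarantees for the convex surrogate to transfer to the zero-one loss.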

 

DC Field: Value
dc.contributor.author: Frei, S
dc.contributor.author: Cao, Y
dc.contributor.author: Gu, Q
dc.date.accessioned: 2022-07-22T05:28:03Z
dc.date.available: 2022-07-22T05:28:03Z
dc.date.issued: 2021
dc.identifier.citation: 38th International Conference on Machine Learning (ICML), 18-24 July 2021, Virtual Event. In Proceedings of the 38th International Conference on Machine Learning (ICML) 2021, 18-24 July 2021, v. 139, p. 3417-3426
dc.identifier.uri: http://hdl.handle.net/10722/314620
dc.description.abstract: We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of halfspaces. We show that when a quantity we refer to as the soft margin is well-behaved (a condition satisfied by log-concave isotropic distributions, among others), minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. As standard convex optimization arguments lead to efficient guarantees for minimizing convex surrogates of the zero-one loss, our methods allow for the first positive guarantees for the classification error of halfspaces learned by gradient descent using the binary cross-entropy or hinge loss in the presence of agnostic label noise.
dc.language: eng
dc.publisher: ML Research Press
dc.relation.ispartof: Proceedings of the 38th International Conference on Machine Learning (ICML) 2021, 18-24 July 2021
dc.title: Agnostic learning of halfspaces with gradient descent via soft margins
dc.type: Conference_Paper
dc.identifier.email: Cao, Y: yuancao@hku.hk
dc.identifier.authority: Cao, Y=rp02862
dc.identifier.hkuros: 334659
dc.identifier.volume: 139
dc.identifier.spage: 3417
dc.identifier.epage: 3426
dc.publisher.place: Austria
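
To make the method described in the abstract concrete, here is a minimal runnable sketch of gradient descent on the binary cross-entropy (logistic) surrogate for halfspace learning under label noise. It is not the paper's exact algorithm: the Gaussian data model, noise rate, step size, and iteration count are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: isotropic Gaussian features (log-concave isotropic, so the
# abstract's soft-margin condition holds), labels from a ground-truth
# halfspace with a fraction flipped to mimic agnostic label noise.
n, d, noise_rate = 2000, 20, 0.1
X = rng.standard_normal((n, d))
v_star = rng.standard_normal(d)
v_star /= np.linalg.norm(v_star)
y = np.sign(X @ v_star)
y[rng.random(n) < noise_rate] *= -1

def logistic_loss_grad(w, X, y):
    """Gradient of the empirical logistic loss (1/n) sum_i log(1 + exp(-y_i <w, x_i>))."""
    margins = y * (X @ w)
    # Numerically stable sigmoid(-margins) = exp(-logaddexp(0, margins)).
    coeffs = -y * np.exp(-np.logaddexp(0.0, margins))
    return (coeffs[:, None] * X).mean(axis=0)

# Plain gradient descent on the convex surrogate.
w = np.zeros(d)
lr, T = 1.0, 500
for _ in range(T):
    w -= lr * logistic_loss_grad(w, X, y)

print("zero-one training error:", np.mean(np.sign(X @ w) != y))

With these settings the zero-one error of the learned halfspace sign(<w, x>) should settle near the injected 10% noise rate, consistent with the abstract's claim that minimizing the convex surrogate approximately minimizes the zero-one loss itself.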
