
Conference Paper: Agnostic learning of a single neuron with gradient descent

Title: Agnostic learning of a single neuron with gradient descent
Authors: Frei, Spencer; Cao, Yuan; Gu, Quanquan
Issue Date: 2020
Citation: 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Virtual Conference, 6-12 December 2020. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020
Abstract: We consider the problem of learning the best-fitting single neuron as measured by the expected square loss E_{(x,y)~D}[(σ(wᵀx) − y)²] over some unknown joint distribution D by using gradient descent to minimize the empirical risk induced by a set of i.i.d. samples S ~ Dⁿ. The activation function σ is an arbitrary Lipschitz and non-decreasing function, making the optimization problem nonconvex and nonsmooth in general, and covers typical neural network activation functions and inverse link functions in the generalized linear model setting. In the agnostic PAC learning setting, where no assumption on the relationship between the labels y and the input x is made, if the optimal population risk is OPT, we show that gradient descent achieves population risk O(OPT) + ε in polynomial time and sample complexity when σ is strictly increasing. For the ReLU activation, our population risk guarantee is O(OPT^{1/2}) + ε. When labels take the form y = σ(vᵀx) + ξ for zero-mean sub-Gaussian noise ξ, we show that the population risk guarantees for gradient descent improve to OPT + ε. Our sample complexity and runtime guarantees are (almost) dimension independent, and when σ is strictly increasing, require no distributional assumptions beyond boundedness. For ReLU, we show the same results under a nondegeneracy assumption for the marginal distribution of the input.
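The setting the abstract describes — gradient descent on the empirical square loss of a single neuron σ(wᵀx) — can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the step size, iteration count, and sigmoid activation are assumptions chosen for the example.

```python
import numpy as np

def empirical_risk(w, X, y, act):
    # (1/n) * sum_i (act(w^T x_i) - y_i)^2, the empirical square loss
    return np.mean((act(X @ w) - y) ** 2)

def gd_single_neuron(X, y, act, act_deriv, lr=0.1, steps=500):
    # Plain gradient descent on the empirical risk, initialized at zero
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        z = X @ w
        residual = act(z) - y
        # gradient of the empirical risk with respect to w
        grad = (2.0 / n) * (X.T @ (residual * act_deriv(z)))
        w -= lr * grad
    return w

# Toy run with a strictly increasing activation (sigmoid) and
# noiseless realizable labels y = act(v^T x), so OPT = 0
rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sigmoid_d = lambda z: sigmoid(z) * (1.0 - sigmoid(z))
X = rng.normal(size=(1000, 5))
v = rng.normal(size=5)
y = sigmoid(X @ v)
w = gd_single_neuron(X, y, sigmoid, sigmoid_d)
```

In the agnostic setting the labels need not be realizable by any neuron; the paper's guarantees bound the population risk of the learned w relative to the best possible value OPT.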
Persistent Identifier: http://hdl.handle.net/10722/303722
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399

 

DC Field                  Value
dc.contributor.author     Frei, Spencer
dc.contributor.author     Cao, Yuan
dc.contributor.author     Gu, Quanquan
dc.date.accessioned       2021-09-15T08:25:53Z
dc.date.available         2021-09-15T08:25:53Z
dc.date.issued            2020
dc.identifier.citation    34th Conference on Neural Information Processing Systems (NeurIPS 2020), Virtual Conference, 6-12 December 2020. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020
dc.identifier.issn        1049-5258
dc.identifier.uri         http://hdl.handle.net/10722/303722
dc.description.abstract   We consider the problem of learning the best-fitting single neuron as measured by the expected square loss E_{(x,y)~D}[(σ(wᵀx) − y)²] over some unknown joint distribution D by using gradient descent to minimize the empirical risk induced by a set of i.i.d. samples S ~ Dⁿ. The activation function σ is an arbitrary Lipschitz and non-decreasing function, making the optimization problem nonconvex and nonsmooth in general, and covers typical neural network activation functions and inverse link functions in the generalized linear model setting. In the agnostic PAC learning setting, where no assumption on the relationship between the labels y and the input x is made, if the optimal population risk is OPT, we show that gradient descent achieves population risk O(OPT) + ε in polynomial time and sample complexity when σ is strictly increasing. For the ReLU activation, our population risk guarantee is O(OPT^{1/2}) + ε. When labels take the form y = σ(vᵀx) + ξ for zero-mean sub-Gaussian noise ξ, we show that the population risk guarantees for gradient descent improve to OPT + ε. Our sample complexity and runtime guarantees are (almost) dimension independent, and when σ is strictly increasing, require no distributional assumptions beyond boundedness. For ReLU, we show the same results under a nondegeneracy assumption for the marginal distribution of the input.
dc.language               eng
dc.relation.ispartof      Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
dc.title                  Agnostic learning of a single neuron with gradient descent
dc.type                   Conference_Paper
dc.description.nature     link_to_OA_fulltext
dc.identifier.scopus      eid_2-s2.0-85098436117
