Article: Approximation bounds for norm constrained neural networks with applications to regression and GANs

Title: Approximation bounds for norm constrained neural networks with applications to regression and GANs
Authors: Jiao, Yuling; Wang, Yang; Yang, Yunfei
Keywords: Approximation theory; Deep learning; GAN; Neural network
Issue Date: 2023
Citation: Applied and Computational Harmonic Analysis, 2023, v. 65, p. 249-278
Abstract: This paper studies the approximation capacity of ReLU neural networks with norm constraints on the weights. We prove upper and lower bounds on the approximation error of these networks for smooth function classes. The lower bound is derived through the Rademacher complexity of neural networks, which may be of independent interest. We apply these approximation bounds to analyze the convergence of regression using norm-constrained neural networks and of distribution estimation by GANs. In particular, we obtain convergence rates for over-parameterized neural networks. It is also shown that GANs can achieve the optimal rate of learning probability distributions when the discriminator is a properly chosen norm-constrained neural network.
Persistent Identifier: http://hdl.handle.net/10722/363525
ISSN: 1063-5203
2023 Impact Factor: 2.6
2023 SCImago Journal Rankings: 2.231

 

DC Field: Value
dc.contributor.author: Jiao, Yuling
dc.contributor.author: Wang, Yang
dc.contributor.author: Yang, Yunfei
dc.date.accessioned: 2025-10-10T07:47:33Z
dc.date.available: 2025-10-10T07:47:33Z
dc.date.issued: 2023
dc.identifier.citation: Applied and Computational Harmonic Analysis, 2023, v. 65, p. 249-278
dc.identifier.issn: 1063-5203
dc.identifier.uri: http://hdl.handle.net/10722/363525
dc.description.abstract: This paper studies the approximation capacity of ReLU neural networks with norm constraints on the weights. We prove upper and lower bounds on the approximation error of these networks for smooth function classes. The lower bound is derived through the Rademacher complexity of neural networks, which may be of independent interest. We apply these approximation bounds to analyze the convergence of regression using norm-constrained neural networks and of distribution estimation by GANs. In particular, we obtain convergence rates for over-parameterized neural networks. It is also shown that GANs can achieve the optimal rate of learning probability distributions when the discriminator is a properly chosen norm-constrained neural network.
dc.language: eng
dc.relation.ispartof: Applied and Computational Harmonic Analysis
dc.subject: Approximation theory
dc.subject: Deep learning
dc.subject: GAN
dc.subject: Neural network
dc.title: Approximation bounds for norm constrained neural networks with applications to regression and GANs
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.acha.2023.03.004
dc.identifier.scopus: eid_2-s2.0-85151018864
dc.identifier.volume: 65
dc.identifier.spage: 249
dc.identifier.epage: 278
dc.identifier.eissn: 1096-603X
