Article: Denoising Noisy Neural Networks: A Bayesian Approach With Compensation

Title: Denoising Noisy Neural Networks: A Bayesian Approach With Compensation
Authors: Shao, Yulin; Liew, Soung Chang; Gunduz, Deniz
Keywords: denoiser; federated edge learning; noisy neural network; wireless transmission of neural networks
Issue Date: 2023
Citation: IEEE Transactions on Signal Processing, 2023, v. 71, p. 2460-2474
Abstract: Deep neural networks (DNNs) with noisy weights, which we refer to as noisy neural networks (NoisyNNs), arise from the training and inference of DNNs in the presence of noise. NoisyNNs emerge in many new applications, including the wireless transmission of DNNs, the efficient deployment or storage of DNNs in analog devices, and the truncation or quantization of DNN weights. This article studies a fundamental problem of NoisyNNs: how to reconstruct the DNN weights from their noisy manifestations. While prior works relied exclusively on maximum likelihood (ML) estimation, this article puts forth a denoising approach to reconstruct DNNs with the aim of maximizing the inference accuracy of the reconstructed models. The superiority of our denoiser is rigorously proven in two small-scale problems, wherein we consider a quadratic neural network function and a shallow feedforward neural network, respectively. When applied to advanced learning tasks with modern DNN architectures, our denoiser exhibits significantly better performance than the ML estimator. Performance is measured by the average test accuracy of the denoised DNN model versus the weight-variance-to-noise-power ratio (WNR). When denoising a noisy ResNet34 model arising from noisy inference, our denoiser outperforms ML estimation by up to 4.1 dB to achieve a test accuracy of 60%. When denoising a noisy ResNet18 model arising from noisy training, our denoiser outperforms ML estimation by 13.4 dB and 8.3 dB to achieve test accuracies of 60% and 80%, respectively.
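The contrast the abstract draws between ML estimation and Bayesian denoising can be illustrated with a minimal sketch. Assuming a Gaussian observation model y = w + n with weight prior w ~ N(mu, sigma_w^2) and noise n ~ N(0, sigma_n^2), the ML estimate of each weight is simply the noisy observation, while a linear MMSE denoiser shrinks it toward the prior mean. This is a simplified illustration of the general idea, not the authors' exact estimator (the paper's denoiser additionally involves a compensation term):

```python
import numpy as np

def ml_estimate(y):
    # Under y = w + n with Gaussian noise, the ML estimate of w is y itself.
    return y

def mmse_denoise(y, mu, sigma_w2, sigma_n2):
    # Linear MMSE (Bayesian) denoiser: shrink the observation toward the
    # prior mean mu by the gain sigma_w^2 / (sigma_w^2 + sigma_n^2).
    gain = sigma_w2 / (sigma_w2 + sigma_n2)
    return mu + gain * (y - mu)

rng = np.random.default_rng(0)
mu, sigma_w2, sigma_n2 = 0.0, 1.0, 0.5   # WNR = sigma_w2 / sigma_n2 = 2 (about 3 dB)
w = mu + np.sqrt(sigma_w2) * rng.standard_normal(100_000)   # "clean" weights
y = w + np.sqrt(sigma_n2) * rng.standard_normal(100_000)    # noisy observations

mse_ml = np.mean((ml_estimate(y) - w) ** 2)                          # close to sigma_n2
mse_mmse = np.mean((mmse_denoise(y, mu, sigma_w2, sigma_n2) - w) ** 2)
print(mse_ml, mse_mmse)  # the MMSE denoiser attains a lower mean-squared error
```

In this toy setting the MMSE denoiser's error approaches sigma_w^2 * sigma_n^2 / (sigma_w^2 + sigma_n^2), strictly below the ML error of sigma_n^2; the paper's contribution is to design the denoiser to maximize inference accuracy rather than minimize weight MSE.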
Persistent Identifier: http://hdl.handle.net/10722/363550
ISSN: 1053-587X
2023 Impact Factor: 4.6
2023 SCImago Journal Rankings: 2.520

 

DC Field: Value
dc.contributor.author: Shao, Yulin
dc.contributor.author: Liew, Soung Chang
dc.contributor.author: Gunduz, Deniz
dc.date.accessioned: 2025-10-10T07:47:42Z
dc.date.available: 2025-10-10T07:47:42Z
dc.date.issued: 2023
dc.identifier.citation: IEEE Transactions on Signal Processing, 2023, v. 71, p. 2460-2474
dc.identifier.issn: 1053-587X
dc.identifier.uri: http://hdl.handle.net/10722/363550
dc.description.abstract: Deep neural networks (DNNs) with noisy weights, which we refer to as noisy neural networks (NoisyNNs), arise from the training and inference of DNNs in the presence of noise. NoisyNNs emerge in many new applications, including the wireless transmission of DNNs, the efficient deployment or storage of DNNs in analog devices, and the truncation or quantization of DNN weights. This article studies a fundamental problem of NoisyNNs: how to reconstruct the DNN weights from their noisy manifestations. While prior works relied exclusively on maximum likelihood (ML) estimation, this article puts forth a denoising approach to reconstruct DNNs with the aim of maximizing the inference accuracy of the reconstructed models. The superiority of our denoiser is rigorously proven in two small-scale problems, wherein we consider a quadratic neural network function and a shallow feedforward neural network, respectively. When applied to advanced learning tasks with modern DNN architectures, our denoiser exhibits significantly better performance than the ML estimator. Performance is measured by the average test accuracy of the denoised DNN model versus the weight-variance-to-noise-power ratio (WNR). When denoising a noisy ResNet34 model arising from noisy inference, our denoiser outperforms ML estimation by up to 4.1 dB to achieve a test accuracy of 60%. When denoising a noisy ResNet18 model arising from noisy training, our denoiser outperforms ML estimation by 13.4 dB and 8.3 dB to achieve test accuracies of 60% and 80%, respectively.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Signal Processing
dc.subject: denoiser
dc.subject: federated edge learning
dc.subject: noisy neural network
dc.subject: wireless transmission of neural networks
dc.title: Denoising Noisy Neural Networks: A Bayesian Approach With Compensation
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TSP.2023.3290327
dc.identifier.scopus: eid_2-s2.0-85163532232
dc.identifier.volume: 71
dc.identifier.spage: 2460
dc.identifier.epage: 2474
dc.identifier.eissn: 1941-0476
