Links for fulltext (may require subscription):
- Publisher Website: 10.1109/TSP.2023.3290327
- Scopus: eid_2-s2.0-85163532232
Article: Denoising Noisy Neural Networks: A Bayesian Approach With Compensation
| Title | Denoising Noisy Neural Networks: A Bayesian Approach With Compensation |
|---|---|
| Authors | Shao, Yulin; Liew, Soung Chang; Gunduz, Deniz |
| Keywords | denoiser; federated edge learning; noisy neural network; wireless transmission of neural networks |
| Issue Date | 2023 |
| Citation | IEEE Transactions on Signal Processing, 2023, v. 71, p. 2460-2474 |
| Abstract | Deep neural networks (DNNs) with noisy weights, which we refer to as noisy neural networks (NoisyNNs), arise from the training and inference of DNNs in the presence of noise. NoisyNNs emerge in many new applications, including the wireless transmission of DNNs, the efficient deployment or storage of DNNs in analog devices, and the truncation or quantization of DNN weights. This article studies a fundamental problem of NoisyNNs: how to reconstruct the DNN weights from their noisy manifestations. While prior works relied exclusively on maximum likelihood (ML) estimation, this article puts forth a denoising approach to reconstruct DNNs with the aim of maximizing the inference accuracy of the reconstructed models. The superiority of our denoiser is rigorously proven in two small-scale problems, wherein we consider a quadratic neural network function and a shallow feedforward neural network, respectively. When applied to advanced learning tasks with modern DNN architectures, our denoiser exhibits significantly better performance than the ML estimator. Consider the average test accuracy of the denoised DNN model as a function of the weight variance to noise power ratio (WNR). When denoising a noisy ResNet34 model arising from noisy inference, our denoiser outperforms ML estimation by up to 4.1 dB to achieve a test accuracy of 60%. When denoising a noisy ResNet18 model arising from noisy training, our denoiser outperforms ML estimation by 13.4 dB and 8.3 dB to achieve test accuracies of 60% and 80%, respectively. |
| Persistent Identifier | http://hdl.handle.net/10722/363550 |
| ISSN | 1053-587X (2023 Impact Factor: 4.6; 2023 SCImago Journal Rank: 2.520) |
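The contrast the abstract draws, ML estimation versus a Bayesian denoiser of noisy weights, can be sketched in miniature. The snippet below is a minimal illustration only, assuming i.i.d. Gaussian weights observed through additive Gaussian noise and using plain LMMSE shrinkage; the paper's actual denoiser (which adds a compensation step) may differ. All variable names here are illustrative, not from the paper.

```python
import numpy as np

# Model assumption (for illustration): w ~ N(0, sigma_w^2), observed as y = w + n
# with n ~ N(0, sigma_n^2). The ML estimate is simply y; a Bayesian (LMMSE)
# denoiser shrinks y toward the prior mean.
rng = np.random.default_rng(0)
sigma_w, sigma_n = 1.0, 0.5

# Weight-variance-to-noise-power ratio (WNR), in dB, as used in the abstract.
wnr_db = 10 * np.log10(sigma_w**2 / sigma_n**2)

w = rng.normal(0.0, sigma_w, size=100_000)        # "clean" weights
y = w + rng.normal(0.0, sigma_n, size=w.shape)    # noisy observations

w_ml = y                                          # ML estimate: keep the observation
shrink = sigma_w**2 / (sigma_w**2 + sigma_n**2)   # LMMSE shrinkage factor
w_bayes = shrink * y                              # Bayesian (LMMSE) denoised weights

mse_ml = np.mean((w_ml - w) ** 2)                 # ~= sigma_n^2
mse_bayes = np.mean((w_bayes - w) ** 2)           # strictly smaller under this model
```

Under this toy model the shrinkage estimator always achieves lower mean-squared error than ML; the paper's contribution is denoising aimed at the *inference accuracy* of the reconstructed DNN rather than weight-space MSE alone.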
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Shao, Yulin | - |
| dc.contributor.author | Liew, Soung Chang | - |
| dc.contributor.author | Gunduz, Deniz | - |
| dc.date.accessioned | 2025-10-10T07:47:42Z | - |
| dc.date.available | 2025-10-10T07:47:42Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | IEEE Transactions on Signal Processing, 2023, v. 71, p. 2460-2474 | - |
| dc.identifier.issn | 1053-587X | - |
| dc.identifier.uri | http://hdl.handle.net/10722/363550 | - |
| dc.description.abstract | Deep neural networks (DNNs) with noisy weights, which we refer to as noisy neural networks (NoisyNNs), arise from the training and inference of DNNs in the presence of noise. NoisyNNs emerge in many new applications, including the wireless transmission of DNNs, the efficient deployment or storage of DNNs in analog devices, and the truncation or quantization of DNN weights. This article studies a fundamental problem of NoisyNNs: how to reconstruct the DNN weights from their noisy manifestations. While prior works relied exclusively on maximum likelihood (ML) estimation, this article puts forth a denoising approach to reconstruct DNNs with the aim of maximizing the inference accuracy of the reconstructed models. The superiority of our denoiser is rigorously proven in two small-scale problems, wherein we consider a quadratic neural network function and a shallow feedforward neural network, respectively. When applied to advanced learning tasks with modern DNN architectures, our denoiser exhibits significantly better performance than the ML estimator. Consider the average test accuracy of the denoised DNN model as a function of the weight variance to noise power ratio (WNR). When denoising a noisy ResNet34 model arising from noisy inference, our denoiser outperforms ML estimation by up to 4.1 dB to achieve a test accuracy of 60%. When denoising a noisy ResNet18 model arising from noisy training, our denoiser outperforms ML estimation by 13.4 dB and 8.3 dB to achieve test accuracies of 60% and 80%, respectively. | - |
| dc.language | eng | - |
| dc.relation.ispartof | IEEE Transactions on Signal Processing | - |
| dc.subject | denoiser | - |
| dc.subject | federated edge learning | - |
| dc.subject | Noisy neural network | - |
| dc.subject | wireless transmission of neural networks | - |
| dc.title | Denoising Noisy Neural Networks: A Bayesian Approach With Compensation | - |
| dc.type | Article | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1109/TSP.2023.3290327 | - |
| dc.identifier.scopus | eid_2-s2.0-85163532232 | - |
| dc.identifier.volume | 71 | - |
| dc.identifier.spage | 2460 | - |
| dc.identifier.epage | 2474 | - |
| dc.identifier.eissn | 1941-0476 | - |
