File Download
There are no files associated with this item.
Links for fulltext (may require subscription)
- Publisher Website: 10.1364/OE.395204
- Scopus: eid_2-s2.0-85089132909
- PMID: 32752400
- WOS: WOS:000560931200093
Article: On the interplay between physical and content priors in deep learning for computational imaging
Title | On the interplay between physical and content priors in deep learning for computational imaging |
---|---|
Authors | Deng, Mo; Li, Shuai; Zhang, Zhengyun; Kang, Iksung; Fang, Nicholas X.; Barbastathis, George |
Issue Date | 2020 |
Citation | Optics Express, 2020, v. 28, n. 16, p. 24152-24170 |
Abstract | Deep learning (DL) has been applied extensively in many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered: First, how well can the trained neural network generalize to objects very different from the ones in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often not available during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect that a training set imposes on the training process with the Shannon entropy of the images in the dataset: the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also discover that a weaker regularization effect leads to better learning of the underlying propagation model, i.e. the weak object transfer function, applicable to weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance can be achieved if the DNN is trained on a higher-entropy database, e.g. ImageNet, than if the same DNN is trained on a lower-entropy database, e.g. MNIST, as the former allows the underlying physics model to be learned better than the latter. |
Persistent Identifier | http://hdl.handle.net/10722/319053 |
ISI Accession Number ID | WOS:000560931200093 |
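
The abstract above ties the regularization strength of a training set to the Shannon entropy of its images. A minimal sketch of that metric, assuming entropy is computed from each image's gray-level histogram (the paper's exact estimator may differ; `image_entropy` and the toy images below are illustrative only):

```python
import numpy as np

def image_entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is well defined
    return float(-np.sum(p * np.log2(p)))

# Hypothetical comparison: a near-binary, mostly-flat image (MNIST-like)
# versus a densely textured one (ImageNet-like).
rng = np.random.default_rng(0)
sparse = np.zeros((28, 28), dtype=np.uint8)
sparse[10:18, 10:18] = 255
textured = rng.integers(0, 256, (28, 28), dtype=np.uint8)
print(image_entropy(sparse), image_entropy(textured))  # low vs. high
```

On such a measure, near-binary MNIST digits score far lower than textured natural photographs, which is the contrast the abstract draws between the two training databases.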
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Deng, Mo | - |
dc.contributor.author | Li, Shuai | - |
dc.contributor.author | Zhang, Zhengyun | - |
dc.contributor.author | Kang, Iksung | - |
dc.contributor.author | Fang, Nicholas X. | - |
dc.contributor.author | Barbastathis, George | - |
dc.date.accessioned | 2022-10-11T12:25:09Z | - |
dc.date.available | 2022-10-11T12:25:09Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Optics Express, 2020, v. 28, n. 16, p. 24152-24170 | - |
dc.identifier.uri | http://hdl.handle.net/10722/319053 | - |
dc.description.abstract | Deep learning (DL) has been applied extensively in many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered: First, how well can the trained neural network generalize to objects very different from the ones in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often not available during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect that a training set imposes on the training process with the Shannon entropy of the images in the dataset: the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also discover that a weaker regularization effect leads to better learning of the underlying propagation model, i.e. the weak object transfer function, applicable to weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance can be achieved if the DNN is trained on a higher-entropy database, e.g. ImageNet, than if the same DNN is trained on a lower-entropy database, e.g. MNIST, as the former allows the underlying physics model to be learned better than the latter. | - |
dc.language | eng | - |
dc.relation.ispartof | Optics Express | - |
dc.title | On the interplay between physical and content priors in deep learning for computational imaging | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1364/OE.395204 | - |
dc.identifier.pmid | 32752400 | - |
dc.identifier.scopus | eid_2-s2.0-85089132909 | - |
dc.identifier.volume | 28 | - |
dc.identifier.issue | 16 | - |
dc.identifier.spage | 24152 | - |
dc.identifier.epage | 24170 | - |
dc.identifier.eissn | 1094-4087 | - |
dc.identifier.isi | WOS:000560931200093 | - |
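
For context on the "weak object transfer function" named in the abstract: for a thin, weakly scattering pure-phase object recorded by free-space propagation over a distance z (the lensless geometry PhENN addresses), the standard weak-object approximation yields a linear transfer function. A sketch in the usual convention, which may differ from the paper's normalization and sign choices:

```latex
% Weak-object approximation: the measured intensity spectrum \hat{I}
% after propagating a weak phase object \phi(\mathbf{r}) by distance z
% is linear in the object spectrum \hat{\phi}:
\[
  \hat{I}(\mathbf{u}) \;\approx\; \delta(\mathbf{u})
    \;+\; 2\sin\!\bigl(\pi \lambda z \,\lVert\mathbf{u}\rVert^{2}\bigr)\,
          \hat{\phi}(\mathbf{u}),
\]
% i.e. the phase transfer function is
% H(\mathbf{u}) = 2\sin(\pi \lambda z \,\lVert\mathbf{u}\rVert^{2}),
% with wavelength \lambda and spatial frequency \mathbf{u}; the overall
% sign depends on the phase convention (e^{+i\phi} vs. e^{-i\phi}).
```

Because this transfer function is the linear physics model underlying the imaging system, "learning the physics" in the paper's sense amounts to the DNN recovering an operator consistent with it.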