File Download

There are no files associated with this item.

Links for fulltext (may require subscription): Supplementary

Article: On the interplay between physical and content priors in deep learning for computational imaging

Title: On the interplay between physical and content priors in deep learning for computational imaging
Authors: Deng, Mo; Li, Shuai; Zhang, Zhengyun; Kang, Iksung; Fang, Nicholas X.; Barbastathis, George
Issue Date: 2020
Citation: Optics Express, 2020, v. 28, n. 16, p. 24152-24170
Abstract: Deep learning (DL) has been applied extensively in many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered: First, how well can the trained neural network generalize to objects very different from the ones in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often not available during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect that a training set imposes on the training process with the Shannon entropy of the images in the dataset. That is, the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also discover that a weaker regularization effect leads to better learning of the underlying propagation model, i.e. the weak object transfer function, applicable to weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance can be achieved if the DNN is trained on a higher-entropy database, e.g. ImageNet, than if the same DNN is trained on a lower-entropy database, e.g. MNIST, as the former allows the underlying physics model to be learned better than the latter.
Persistent Identifier: http://hdl.handle.net/10722/319053
ISI Accession Number ID: WOS:000560931200093
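
Two quantitative notions in the abstract benefit from being made explicit. The Shannon entropy of an image here is the entropy of its gray-level distribution, H = -Σ_k p_k log2(p_k), where p_k is the fraction of pixels at gray level k. For the propagation model, the weak object transfer function for a weak phase object in a lensless (free-space propagation) geometry is commonly written in the literature as WOTF(u) ∝ sin(πλz|u|²) for propagation distance z and wavelength λ, though the paper's exact convention is not reproduced in this record. Below is a minimal Python sketch of the per-image entropy computation; the function name image_entropy and the 8-bit histogram binning are illustrative assumptions, not the authors' code.

import numpy as np

def image_entropy(img, levels=256):
    # Shannon entropy (in bits) of an image's gray-level histogram.
    # Illustrative sketch only: the paper's exact estimator is not given
    # in this record; we assume a per-image histogram over `levels` bins.
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()          # empirical gray-level probabilities
    p = p[p > 0]                   # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

# Toy contrast between dataset styles: a near-binary, MNIST-like image
# has low entropy; a richly textured, ImageNet-like image has high entropy.
rng = np.random.default_rng(0)
mnist_like = (rng.random((28, 28)) > 0.8).astype(np.uint8) * 255
imagenet_like = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(image_entropy(mnist_like))     # about 0.7 bits (two gray levels)
print(image_entropy(imagenet_like))  # close to 8 bits (near-uniform)

On this measure, the abstract's claim is that the higher-entropy training set imposes the weaker content prior, leaving more of the network's capacity to capture the propagation physics.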


DC Field: Value
dc.contributor.author: Deng, Mo
dc.contributor.author: Li, Shuai
dc.contributor.author: Zhang, Zhengyun
dc.contributor.author: Kang, Iksung
dc.contributor.author: Fang, Nicholas X.
dc.contributor.author: Barbastathis, George
dc.date.accessioned: 2022-10-11T12:25:09Z
dc.date.available: 2022-10-11T12:25:09Z
dc.date.issued: 2020
dc.identifier.citation: Optics Express, 2020, v. 28, n. 16, p. 24152-24170
dc.identifier.uri: http://hdl.handle.net/10722/319053
dc.description.abstract: Deep learning (DL) has been applied extensively in many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered: First, how well can the trained neural network generalize to objects very different from the ones in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often not available during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect that a training set imposes on the training process with the Shannon entropy of the images in the dataset. That is, the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also discover that a weaker regularization effect leads to better learning of the underlying propagation model, i.e. the weak object transfer function, applicable to weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance can be achieved if the DNN is trained on a higher-entropy database, e.g. ImageNet, than if the same DNN is trained on a lower-entropy database, e.g. MNIST, as the former allows the underlying physics model to be learned better than the latter.
dc.language: eng
dc.relation.ispartof: Optics Express
dc.title: On the interplay between physical and content priors in deep learning for computational imaging
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1364/OE.395204
dc.identifier.pmid: 32752400
dc.identifier.scopus: eid_2-s2.0-85089132909
dc.identifier.volume: 28
dc.identifier.issue: 16
dc.identifier.spage: 24152
dc.identifier.epage: 24170
dc.identifier.eissn: 1094-4087
dc.identifier.isi: WOS:000560931200093
