Article: High-dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction

Title: High-dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction
Authors: Meng, N; So, HKH; Sun, X; Lam, EYM
Keywords: Light field super-resolution
4-dimensional convolution
convolutional neural networks
deep learning
Issue Date: 2021
Publisher: IEEE. The Journal's web site is located at http://www.computer.org/tpami
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, v. 43 n. 3, p. 873-886
Abstract: We consider the problem of high-dimensional light field reconstruction and develop a learning-based framework for spatial and angular super-resolution. Many current approaches either require disparity clues or restore the spatial and angular details separately. Such methods have difficulties with non-Lambertian surfaces or occlusions. In contrast, we formulate light field super-resolution (LFSR) as tensor restoration and develop a learning framework based on a two-stage restoration with 4-dimensional (4D) convolution. This allows our model to learn the features capturing the geometry information encoded in multiple adjacent views. Such geometric features vary near the occlusion regions and indicate the foreground object border. To train a feasible network, we propose a novel normalization operation based on a group of views in the feature maps, design a stage-wise loss function, and develop the multi-range training strategy to further improve the performance. Evaluations are conducted on a number of light field datasets including real-world scenes, synthetic data, and microscope light fields. The proposed method achieves superior performance and shorter execution time compared with other state-of-the-art schemes.
Persistent Identifier: http://hdl.handle.net/10722/289264
ISSN: 0162-8828
2021 Impact Factor: 24.314
2020 SCImago Journal Rankings: 3.811
ISI Accession Number ID: WOS:000616309900009
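
As a rough illustration of the 4D convolution described in the abstract above, the sketch below applies a single 4-dimensional kernel to a light field tensor L(u, v, x, y). This is not the authors' implementation; the array shapes, the 3x3x3x3 kernel, and the use of scipy.ndimage.convolve are assumptions made purely for demonstration.

    # Minimal sketch (not the paper's code): one 4D convolution over a light
    # field tensor L(u, v, x, y), where (u, v) index the angular views and
    # (x, y) index the spatial pixels. Shapes and kernel size are assumed.
    import numpy as np
    from scipy.ndimage import convolve

    U, V, X, Y = 5, 5, 64, 64            # hypothetical 5x5 views of 64x64 pixels
    light_field = np.random.rand(U, V, X, Y).astype(np.float32)

    # A 3x3x3x3 kernel couples neighbouring views and pixels jointly, which is
    # what allows 4D filters to capture geometry across adjacent views.
    kernel = np.random.rand(3, 3, 3, 3).astype(np.float32)
    kernel /= kernel.sum()               # normalise so the output stays in range

    features = convolve(light_field, kernel, mode='nearest')
    print(features.shape)                # (5, 5, 64, 64): one 4D feature map

A full super-resolution network would stack many such filters with learned weights and, per the abstract, restore the angular and spatial details in two stages rather than with a single fixed kernel.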

 

DC Field | Value | Language
dc.contributor.author | MENG, N | -
dc.contributor.author | So, HKH | -
dc.contributor.author | Sun, X | -
dc.contributor.author | Lam, EYM | -
dc.date.accessioned | 2020-10-22T08:10:12Z | -
dc.date.available | 2020-10-22T08:10:12Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, v. 43 n. 3, p. 873-886 | -
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://hdl.handle.net/10722/289264 | -
dc.description.abstract | We consider the problem of high-dimensional light field reconstruction and develop a learning-based framework for spatial and angular super-resolution. Many current approaches either require disparity clues or restore the spatial and angular details separately. Such methods have difficulties with non-Lambertian surfaces or occlusions. In contrast, we formulate light field super-resolution (LFSR) as tensor restoration and develop a learning framework based on a two-stage restoration with 4-dimensional (4D) convolution. This allows our model to learn the features capturing the geometry information encoded in multiple adjacent views. Such geometric features vary near the occlusion regions and indicate the foreground object border. To train a feasible network, we propose a novel normalization operation based on a group of views in the feature maps, design a stage-wise loss function, and develop the multi-range training strategy to further improve the performance. Evaluations are conducted on a number of light field datasets including real-world scenes, synthetic data, and microscope light fields. The proposed method achieves superior performance and shorter execution time compared with other state-of-the-art schemes. | -
dc.language | eng | -
dc.publisher | IEEE. The Journal's web site is located at http://www.computer.org/tpami | -
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | -
dc.rights | IEEE Transactions on Pattern Analysis and Machine Intelligence. Copyright © IEEE. | -
dc.rights | ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | -
dc.subject | Light field super-resolution | -
dc.subject | 4-dimensional convolution | -
dc.subject | convolutional neural networks | -
dc.subject | deep learning | -
dc.title | High-dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction | -
dc.type | Article | -
dc.identifier.email | So, HKH: hso@eee.hku.hk | -
dc.identifier.email | Lam, EYM: elam@eee.hku.hk | -
dc.identifier.authority | So, HKH=rp00169 | -
dc.identifier.authority | Lam, EYM=rp00131 | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TPAMI.2019.2945027 | -
dc.identifier.pmid | 31581075 | -
dc.identifier.scopus | eid_2-s2.0-85100807420 | -
dc.identifier.hkuros | 316947 | -
dc.identifier.volume | 43 | -
dc.identifier.issue | 3 | -
dc.identifier.spage | 873 | -
dc.identifier.epage | 886 | -
dc.identifier.isi | WOS:000616309900009 | -
dc.publisher.place | United States | -
dc.identifier.issnl | 0162-8828 | -
