Conference Paper: Learning a Reinforced Agent for Flexible Exposure Bracketing Selection

Title: Learning a Reinforced Agent for Flexible Exposure Bracketing Selection
Authors: Wang, Z; Zhang, J; Lin, M; Wang, J; Luo, P; Ren, J
Keywords: Semantics; Dynamic range; Feature extraction; Learning (artificial intelligence); Cameras
Issue Date: 2020
Publisher: IEEE Computer Society. The conference proceedings are available at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147
Citation: Proceedings of IEEE/CVF International Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, USA, 14-19 June 2020, p. 1817-1825
Abstract: Automatically selecting an exposure bracketing (a set of differently exposed images) is important for obtaining a high dynamic range image through multi-exposure fusion. Unlike previous methods, which impose many restrictions such as requiring a camera response function, a sensor noise model, and a stream of preview images with different exposures (not accessible in some scenarios, e.g. mobile applications), we propose a novel deep neural network, EBSNet, that automatically selects an exposure bracketing and is flexible enough to operate without these restrictions. EBSNet is formulated as a reinforced agent trained by maximizing rewards provided by a multi-exposure fusion network (MEFNet). By utilizing the illumination and semantic information extracted from just a single auto-exposure preview image, EBSNet can select an optimal exposure bracketing for multi-exposure fusion. EBSNet and MEFNet can be jointly trained and produce favorable results against recent state-of-the-art approaches. To facilitate future research, we provide a new benchmark dataset for multi-exposure selection and fusion.
Description: Session: Poster 1.2 (3D From Multiview and Sensors; Computational Photography; Efficient Training and Inference Methods for Networks); Poster no. 57; Paper ID 1804
CVPR 2020 was held virtually due to COVID-19
Persistent Identifier: http://hdl.handle.net/10722/284165
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
ISI Accession Number ID: WOS:000620679502008
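
The abstract above frames EBSNet as a reinforced agent that learns to pick an exposure bracketing from a single auto-exposure preview image, rewarded by a multi-exposure fusion network (MEFNet). Since the full text is not reproduced in this record, the following PyTorch snippet is only a minimal, hypothetical sketch of that general training idea (a small policy network choosing among a fixed set of candidate bracketings, updated with a REINFORCE-style policy gradient). The network sizes, the candidate set, and the placeholder reward are assumptions, not the authors' implementation.

# Minimal sketch (NOT the authors' code): a REINFORCE-style agent that picks one
# of several candidate exposure bracketings from a single auto-exposure preview,
# rewarded by the quality of a downstream multi-exposure fusion result.
# All names, sizes, and the dummy reward below are hypothetical placeholders.
import torch
import torch.nn as nn

NUM_BRACKETINGS = 8  # assumed size of the candidate bracketing set

class BracketingPolicy(nn.Module):
    """Tiny stand-in for EBSNet: preview image -> distribution over bracketings."""
    def __init__(self, num_actions=NUM_BRACKETINGS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_actions)

    def forward(self, preview):
        # Return a categorical distribution over candidate bracketings
        return torch.distributions.Categorical(logits=self.head(self.features(preview)))

def fusion_reward(action, preview):
    """Placeholder for the reward a fusion network (MEFNet in the paper) would
    provide, e.g. a quality score of the fused HDR result for this bracketing."""
    return torch.rand(action.shape[0])  # dummy reward, for illustration only

policy = BracketingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for step in range(100):                     # toy training loop
    preview = torch.rand(4, 3, 64, 64)      # batch of auto-exposure preview images
    dist = policy(preview)
    action = dist.sample()                  # select one bracketing per image
    reward = fusion_reward(action, preview)
    # REINFORCE: raise the log-probability of bracketings that earned high reward
    loss = -(dist.log_prob(action) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the paper, the reward would be derived from the quality of MEFNet's fused output for the selected bracketing, and EBSNet and MEFNet are trained jointly; the random reward above only stands in for that signal.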

 

DC Field: Value
dc.contributor.author: Wang, Z
dc.contributor.author: Zhang, J
dc.contributor.author: Lin, M
dc.contributor.author: Wang, J
dc.contributor.author: Luo, P
dc.contributor.author: Ren, J
dc.date.accessioned: 2020-07-20T05:56:36Z
dc.date.available: 2020-07-20T05:56:36Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of IEEE/CVF International Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, USA, 14-19 June 2020, p. 1817-1825
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/284165
dc.description: Session: Poster 1.2 (3D From Multiview and Sensors; Computational Photography; Efficient Training and Inference Methods for Networks); Poster no. 57; Paper ID 1804
dc.description: CVPR 2020 was held virtually due to COVID-19
dc.description.abstract: Automatically selecting an exposure bracketing (a set of differently exposed images) is important for obtaining a high dynamic range image through multi-exposure fusion. Unlike previous methods, which impose many restrictions such as requiring a camera response function, a sensor noise model, and a stream of preview images with different exposures (not accessible in some scenarios, e.g. mobile applications), we propose a novel deep neural network, EBSNet, that automatically selects an exposure bracketing and is flexible enough to operate without these restrictions. EBSNet is formulated as a reinforced agent trained by maximizing rewards provided by a multi-exposure fusion network (MEFNet). By utilizing the illumination and semantic information extracted from just a single auto-exposure preview image, EBSNet can select an optimal exposure bracketing for multi-exposure fusion. EBSNet and MEFNet can be jointly trained and produce favorable results against recent state-of-the-art approaches. To facilitate future research, we provide a new benchmark dataset for multi-exposure selection and fusion.
dc.language: eng
dc.publisher: IEEE Computer Society. The conference proceedings are available at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147
dc.relation.ispartof: IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
dc.rights: IEEE Conference on Computer Vision and Pattern Recognition. Proceedings. Copyright © IEEE Computer Society.
dc.rights: ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Semantics
dc.subject: Dynamic range
dc.subject: Feature extraction
dc.subject: Learning (artificial intelligence)
dc.subject: Cameras
dc.title: Learning a Reinforced Agent for Flexible Exposure Bracketing Selection
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR42600.2020.00189
dc.identifier.scopus: eid_2-s2.0-85094634358
dc.identifier.hkuros: 311026
dc.identifier.spage: 1817
dc.identifier.epage: 1825
dc.identifier.isi: WOS:000620679502008
dc.publisher.place: United States
dc.identifier.issnl: 1063-6919

Export this record via the OAI-PMH interface (XML formats) or to other non-XML formats.
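
For programmatic export, the record can in principle be harvested over OAI-PMH. The following is a minimal sketch assuming the repository exposes a standard OAI-PMH endpoint; the endpoint URL and the OAI identifier are guesses derived from the handle, not values confirmed by this page.

# Hypothetical OAI-PMH GetRecord request returning this item as Dublin Core XML.
# ENDPOINT and the identifier are assumptions; substitute the repository's real values.
import urllib.parse
import urllib.request

ENDPOINT = "https://hub.hku.hk/oai/request"        # assumed OAI-PMH base URL
params = {
    "verb": "GetRecord",                           # standard OAI-PMH verb
    "metadataPrefix": "oai_dc",                    # unqualified Dublin Core
    "identifier": "oai:hub.hku.hk:10722/284165",   # guessed from the handle
}
url = ENDPOINT + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8"))         # raw XML record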