Conference Paper: DeshadowNet: A multi-context embedding deep network for shadow removal

Title: DeshadowNet: A multi-context embedding deep network for shadow removal
Authors: Qu, Liangqiong; Tian, Jiandong; He, Shengfeng; Tang, Yandong; Lau, Rynson W.H.
Issue Date: 2017
Citation: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 2308-2316
Abstract: Shadow removal is a challenging task as it requires the detection/annotation of shadows as well as semantic understanding of the scene. In this paper, we propose an automatic and end-to-end deep neural network (DeshadowNet) to tackle these problems in a unified manner. DeshadowNet is designed with a multi-context architecture, where the output shadow matte is predicted by embedding information from three different perspectives. The first global network extracts shadow features from a global view. Two levels of features are derived from the global network and transferred to two parallel networks. While one extracts the appearance of the input image, the other one involves semantic understanding for final prediction. These two complementary networks generate multi-context features to obtain the shadow matte with fine local details. To evaluate the performance of the proposed method, we construct the first large-scale benchmark with 3088 image pairs. Extensive experiments on two publicly available benchmarks and our large-scale benchmark show that the proposed method performs favorably against several state-of-the-art methods.
Persistent Identifier: http://hdl.handle.net/10722/325382
ISI Accession Number: WOS:000418371402040
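The abstract describes a multi-context architecture: a global network yields two levels of features that feed two parallel branches (one appearance-oriented, one semantics-oriented), whose outputs are fused into a shadow matte. As a rough, hypothetical illustration of that data flow only — not the authors' implementation; every function name, layer choice, and shape below is an assumption — the branching-and-fusion idea can be sketched with per-pixel linear maps standing in for convolutions:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """Stand-in for a conv layer: a per-pixel linear map with tanh (illustration only)."""
    return np.tanh(x @ w)

def deshadow_sketch(img, params):
    # Global view: shared features extracted at two depths.
    g1 = layer(img, params["g1"])   # shallower global features
    g2 = layer(g1, params["g2"])    # deeper global features
    # Appearance branch: input image combined with shallower features.
    a = layer(np.concatenate([img, g1], axis=-1), params["a"])
    # Semantic branch: input image combined with deeper features.
    s = layer(np.concatenate([img, g2], axis=-1), params["s"])
    # Fuse the two contexts into a single-channel shadow matte.
    return layer(np.concatenate([a, s], axis=-1), params["f"])

# Hypothetical weights: 3 input channels, 8 hidden channels per branch.
params = {
    "g1": rng.standard_normal((3, 8)) * 0.1,
    "g2": rng.standard_normal((8, 8)) * 0.1,
    "a":  rng.standard_normal((11, 8)) * 0.1,  # 3 image + 8 global channels
    "s":  rng.standard_normal((11, 8)) * 0.1,
    "f":  rng.standard_normal((16, 1)) * 0.1,  # 8 appearance + 8 semantic channels
}

img = rng.random((4, 4, 3))        # tiny H x W x 3 image
matte = deshadow_sketch(img, params)
print(matte.shape)                 # (4, 4, 1): one matte value per pixel
```

The point of the sketch is only the topology: two feature levels branch off a shared global pathway and are fused late, so the matte prediction sees both low-level appearance and higher-level context for each pixel.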

 

Dublin Core Record (DC Field: Value)

dc.contributor.author: Qu, Liangqiong
dc.contributor.author: Tian, Jiandong
dc.contributor.author: He, Shengfeng
dc.contributor.author: Tang, Yandong
dc.contributor.author: Lau, Rynson W.H.
dc.date.accessioned: 2023-02-27T07:32:24Z
dc.date.available: 2023-02-27T07:32:24Z
dc.date.issued: 2017
dc.identifier.citation: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 2308-2316
dc.identifier.uri: http://hdl.handle.net/10722/325382
dc.description.abstract: Shadow removal is a challenging task as it requires the detection/annotation of shadows as well as semantic understanding of the scene. In this paper, we propose an automatic and end-to-end deep neural network (DeshadowNet) to tackle these problems in a unified manner. DeshadowNet is designed with a multi-context architecture, where the output shadow matte is predicted by embedding information from three different perspectives. The first global network extracts shadow features from a global view. Two levels of features are derived from the global network and transferred to two parallel networks. While one extracts the appearance of the input image, the other one involves semantic understanding for final prediction. These two complementary networks generate multi-context features to obtain the shadow matte with fine local details. To evaluate the performance of the proposed method, we construct the first large-scale benchmark with 3088 image pairs. Extensive experiments on two publicly available benchmarks and our large-scale benchmark show that the proposed method performs favorably against several state-of-the-art methods.
dc.language: eng
dc.relation.ispartof: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
dc.title: DeshadowNet: A multi-context embedding deep network for shadow removal
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR.2017.248
dc.identifier.scopus: eid_2-s2.0-85044275240
dc.identifier.volume: 2017-January
dc.identifier.spage: 2308
dc.identifier.epage: 2316
dc.identifier.isi: WOS:000418371402040
