Conference Paper: DeshadowNet: A multi-context embedding deep network for shadow removal
Field | Value
---|---
Title | DeshadowNet: A multi-context embedding deep network for shadow removal
Authors | Qu, Liangqiong; Tian, Jiandong; He, Shengfeng; Tang, Yandong; Lau, Rynson W.H.
Issue Date | 2017
Citation | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 2308-2316
Abstract | Shadow removal is a challenging task as it requires the detection/annotation of shadows as well as semantic understanding of the scene. In this paper, we propose an automatic and end-to-end deep neural network (DeshadowNet) to tackle these problems in a unified manner. DeshadowNet is designed with a multi-context architecture, where the output shadow matte is predicted by embedding information from three different perspectives. The first global network extracts shadow features from a global view. Two levels of features are derived from the global network and transferred to two parallel networks. While one extracts the appearance of the input image, the other one involves semantic understanding for final prediction. These two complementary networks generate multi-context features to obtain the shadow matte with fine local details. To evaluate the performance of the proposed method, we construct the first large-scale benchmark with 3088 image pairs. Extensive experiments on two publicly available benchmarks and our large-scale benchmark show that the proposed method performs favorably against several state-of-the-art methods.
Persistent Identifier | http://hdl.handle.net/10722/325382
ISI Accession Number ID | WOS:000418371402040
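
The abstract outlines a three-branch ("multi-context") architecture: a global network whose features at two depths feed an appearance branch and a semantic branch, which are fused into a shadow matte. The sketch below is a minimal PyTorch illustration of that wiring only, not the paper's actual network: all layer names and widths, the sigmoid squashing, and the multiplicative matte-recovery step at the end are assumptions for illustration (the real DeshadowNet uses substantially deeper subnetworks).

```python
import torch
import torch.nn as nn

class DeshadowNetSketch(nn.Module):
    """Toy three-branch network mirroring the abstract's description."""
    def __init__(self):
        super().__init__()
        # Global network (G): two stages, so two levels of global features
        # can be handed to the parallel branches.
        self.g1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.g2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # Appearance branch (A): image + shallow global features.
        self.a_net = nn.Sequential(nn.Conv2d(3 + 32, 64, 3, padding=1), nn.ReLU())
        # Semantic branch (S): image + deeper global features.
        self.s_net = nn.Sequential(nn.Conv2d(3 + 64, 64, 3, padding=1), nn.ReLU())
        # Fusion head: merge both contexts into a 3-channel shadow matte.
        self.fusion = nn.Conv2d(64 + 64, 3, 3, padding=1)

    def forward(self, x):
        f1 = self.g1(x)                               # shallow global features
        f2 = self.g2(f1)                              # deeper global features
        a = self.a_net(torch.cat([x, f1], dim=1))     # appearance context
        s = self.s_net(torch.cat([x, f2], dim=1))     # semantic context
        return self.fusion(torch.cat([a, s], dim=1))  # predicted shadow matte

# Usage: predict a matte and (hypothetically) recover a shadow-free image.
net = DeshadowNetSketch()
shadow = torch.rand(1, 3, 224, 224)           # a dummy shadow image
matte = torch.sigmoid(net(shadow))            # squash matte into (0, 1)
shadow_free = shadow / matte.clamp(min=1e-3)  # assumed multiplicative recovery
```
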
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Qu, Liangqiong | - |
dc.contributor.author | Tian, Jiandong | - |
dc.contributor.author | He, Shengfeng | - |
dc.contributor.author | Tang, Yandong | - |
dc.contributor.author | Lau, Rynson W.H. | - |
dc.date.accessioned | 2023-02-27T07:32:24Z | - |
dc.date.available | 2023-02-27T07:32:24Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 2308-2316 | - |
dc.identifier.uri | http://hdl.handle.net/10722/325382 | - |
dc.description.abstract | Shadow removal is a challenging task as it requires the detection/annotation of shadows as well as semantic understanding of the scene. In this paper, we propose an automatic and end-to-end deep neural network (DeshadowNet) to tackle these problems in a unified manner. DeshadowNet is designed with a multi-context architecture, where the output shadow matte is predicted by embedding information from three different perspectives. The first global network extracts shadow features from a global view. Two levels of features are derived from the global network and transferred to two parallel networks. While one extracts the appearance of the input image, the other one involves semantic understanding for final prediction. These two complementary networks generate multi-context features to obtain the shadow matte with fine local details. To evaluate the performance of the proposed method, we construct the first large-scale benchmark with 3088 image pairs. Extensive experiments on two publicly available benchmarks and our large-scale benchmark show that the proposed method performs favorably against several state-of-the-art methods. | -
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 | - |
dc.title | DeshadowNet: A multi-context embedding deep network for shadow removal | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR.2017.248 | - |
dc.identifier.scopus | eid_2-s2.0-85044275240 | - |
dc.identifier.volume | 2017-January | - |
dc.identifier.spage | 2308 | - |
dc.identifier.epage | 2316 | - |
dc.identifier.isi | WOS:000418371402040 | - |