Conference Paper: Multi-Source Weak Supervision for Saliency Detection
Title | Multi-Source Weak Supervision for Saliency Detection |
---|---|
Authors | Zeng, Y; Zhuge, Y; Lu, H; Zhang, L; Qian, M; Yu, Y |
Issue Date | 2019 |
Publisher | Institute of Electrical and Electronics Engineers, Inc. |
Citation | IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15-20 June, 2019. In Proceedings: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition: CVPR 2019, p. 6067-6076 |
Abstract | The high cost of pixel-level annotations makes it appealing to train saliency detection models with weak supervision. However, a single weak supervision source usually does not contain enough information to train a well-performing model. To this end, we propose a unified framework to train saliency detection models with diverse weak supervision sources. In this paper, we use category labels, captions, and unlabelled data for training, yet other supervision sources can also be plugged into this flexible framework. We design a classification network (CNet) and a caption generation network (PNet), which learn to predict object categories and generate captions, respectively, while highlighting the most important regions for the corresponding tasks. An attention transfer loss is designed to transmit supervision signals between networks, such that a network trained with one supervision source can benefit from another. An attention coherence loss is defined on unlabelled data to encourage the networks to detect generally salient regions instead of task-specific regions. We use CNet and PNet to generate pixel-level pseudo labels to train a saliency prediction network (SNet). During the testing phase, only SNet is needed to predict saliency maps. Experiments demonstrate that our method compares favourably against unsupervised and weakly supervised methods and even some supervised methods. |
Persistent Identifier | http://hdl.handle.net/10722/316365 |
ISI Accession Number ID | WOS:000529484006029 |
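The abstract above describes two losses that couple CNet and PNet: an attention transfer loss between the two networks and an attention coherence loss on unlabelled data. The snippet below is a minimal PyTorch-style sketch of how such losses might be wired up; the tensor shapes, the binary cross-entropy form, and the agreement threshold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def attention_transfer_loss(att_src, att_tgt):
    """Pull the attention map of one network toward the map of the other.

    att_src, att_tgt: (B, 1, H, W) attention maps with values in [0, 1]
    (hypothetical CNet/PNet outputs). The target map is detached so the
    supervision signal flows one way per call.
    """
    return F.binary_cross_entropy(att_src, att_tgt.detach())


def attention_coherence_loss(att_c, att_p, threshold=0.5):
    """On unlabelled images, push both maps toward regions they agree on.

    Pixels where both maps exceed the threshold are treated as salient and
    the rest as background; this is a rough stand-in for the paper's
    formulation, not a reproduction of it.
    """
    agreement = ((att_c.detach() > threshold) & (att_p.detach() > threshold)).float()
    return (F.binary_cross_entropy(att_c, agreement)
            + F.binary_cross_entropy(att_p, agreement))


if __name__ == "__main__":
    # Toy usage with random tensors standing in for CNet/PNet attention maps.
    att_cnet = torch.rand(2, 1, 28, 28, requires_grad=True)
    att_pnet = torch.rand(2, 1, 28, 28, requires_grad=True)
    loss = attention_transfer_loss(att_cnet, att_pnet) \
        + attention_coherence_loss(att_cnet, att_pnet)
    loss.backward()
    print(float(loss))
```

In the paper's pipeline these coupled networks then produce pixel-level pseudo labels used to train the final saliency prediction network (SNet), which is the only network needed at test time.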
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zeng, Y | - |
dc.contributor.author | Zhuge, Y | - |
dc.contributor.author | Lu, H | - |
dc.contributor.author | Zhang, L | - |
dc.contributor.author | Qian, M | - |
dc.contributor.author | Yu, Y | - |
dc.date.accessioned | 2022-09-02T06:10:12Z | - |
dc.date.available | 2022-09-02T06:10:12Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15-20 June, 2019. In Proceedings: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition: CVPR 2019, p. 6067-6076 | - |
dc.identifier.uri | http://hdl.handle.net/10722/316365 | - |
dc.description.abstract | The high cost of pixel-level annotations makes it appealing to train saliency detection models with weak supervision. However, a single weak supervision source usually does not contain enough information to train a well-performing model. To this end, we propose a unified framework to train saliency detection models with diverse weak supervision sources. In this paper, we use category labels, captions, and unlabelled data for training, yet other supervision sources can also be plugged into this flexible framework. We design a classification network (CNet) and a caption generation network (PNet), which learn to predict object categories and generate captions, respectively, while highlighting the most important regions for the corresponding tasks. An attention transfer loss is designed to transmit supervision signals between networks, such that a network trained with one supervision source can benefit from another. An attention coherence loss is defined on unlabelled data to encourage the networks to detect generally salient regions instead of task-specific regions. We use CNet and PNet to generate pixel-level pseudo labels to train a saliency prediction network (SNet). During the testing phase, only SNet is needed to predict saliency maps. Experiments demonstrate that our method compares favourably against unsupervised and weakly supervised methods and even some supervised methods. | -
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers, Inc. | -
dc.relation.ispartof | Proceedings: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition: CVPR 2019 | - |
dc.title | Multi-Source Weak Supervision for Saliency Detection | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | - |
dc.identifier.authority | Yu, Y=rp01415 | - |
dc.identifier.doi | 10.1109/CVPR.2019.00623 | - |
dc.identifier.hkuros | 336350 | - |
dc.identifier.spage | 6067 | - |
dc.identifier.epage | 6076 | - |
dc.identifier.isi | WOS:000529484006029 | - |
dc.publisher.place | United States | - |
dc.identifier.eisbn | 9781728132938 | - |