File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website: 10.1109/CVPR.2017.556
- Scopus: eid_2-s2.0-85041900256
- WOS: WOS:000418371405035
Conference Paper: Learning object interactions and descriptions for semantic image segmentation
Field | Value
---|---
Title | Learning object interactions and descriptions for semantic image segmentation
Authors | Wang, Guangrun; Luo, Ping; Lin, Liang; Wang, Xiaogang |
Issue Date | 2017 |
Citation | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 5235-5243 |
Abstract | © 2017 IEEE. Recent advanced deep convolutional networks (CNNs) have achieved great success in many computer vision tasks, because of their compelling learning complexity and the presence of large-scale labeled data. However, as obtaining per-pixel annotations is expensive, the performance of CNNs in semantic image segmentation is not fully exploited. This work significantly increases the segmentation accuracy of CNNs by learning from an Image Descriptions in the Wild (IDW) dataset. Unlike previous image captioning datasets, where captions were manually and densely annotated, images and their descriptions in IDW are automatically downloaded from the Internet without any manual cleaning or refinement. An IDW-CNN is proposed for joint training on IDW and an existing image segmentation dataset such as Pascal VOC 2012 (VOC). It has two appealing properties. First, knowledge from the different datasets can be fully explored and transferred between them to improve performance. Second, segmentation accuracy on VOC can be steadily increased as more data is selected from IDW. Extensive experiments demonstrate the effectiveness and scalability of IDW-CNN, which outperforms the existing best-performing system by 12% on the VOC12 test set. |
Persistent Identifier | http://hdl.handle.net/10722/273608 |
ISI Accession Number ID | WOS:000418371405035 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Guangrun | - |
dc.contributor.author | Luo, Ping | - |
dc.contributor.author | Lin, Liang | - |
dc.contributor.author | Wang, Xiaogang | - |
dc.date.accessioned | 2019-08-12T09:56:08Z | - |
dc.date.available | 2019-08-12T09:56:08Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 5235-5243 | - |
dc.identifier.uri | http://hdl.handle.net/10722/273608 | - |
dc.description.abstract | © 2017 IEEE. Recent advanced deep convolutional networks (CNNs) have achieved great success in many computer vision tasks, because of their compelling learning complexity and the presence of large-scale labeled data. However, as obtaining per-pixel annotations is expensive, the performance of CNNs in semantic image segmentation is not fully exploited. This work significantly increases the segmentation accuracy of CNNs by learning from an Image Descriptions in the Wild (IDW) dataset. Unlike previous image captioning datasets, where captions were manually and densely annotated, images and their descriptions in IDW are automatically downloaded from the Internet without any manual cleaning or refinement. An IDW-CNN is proposed for joint training on IDW and an existing image segmentation dataset such as Pascal VOC 2012 (VOC). It has two appealing properties. First, knowledge from the different datasets can be fully explored and transferred between them to improve performance. Second, segmentation accuracy on VOC can be steadily increased as more data is selected from IDW. Extensive experiments demonstrate the effectiveness and scalability of IDW-CNN, which outperforms the existing best-performing system by 12% on the VOC12 test set. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 | - |
dc.title | Learning object interactions and descriptions for semantic image segmentation | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR.2017.556 | - |
dc.identifier.scopus | eid_2-s2.0-85041900256 | - |
dc.identifier.volume | 2017-January | - |
dc.identifier.spage | 5235 | - |
dc.identifier.epage | 5243 | - |
dc.identifier.isi | WOS:000418371405035 | - |