Conference Paper: Learning object interactions and descriptions for semantic image segmentation

Title: Learning object interactions and descriptions for semantic image segmentation
Authors: Wang, Guangrun; Luo, Ping; Lin, Liang; Wang, Xiaogang
Issue Date: 2017
Citation: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 5235-5243
Abstract: © 2017 IEEE. Recent deep convolutional networks (CNNs) have achieved great success in many computer vision tasks, owing to their strong learning capacity and the presence of large-scale labeled data. However, because per-pixel annotations are expensive to obtain, the performance of CNNs in semantic image segmentation is not fully exploited. This work significantly increases the segmentation accuracy of CNNs by learning from an Image Descriptions in the Wild (IDW) dataset. Unlike previous image captioning datasets, whose captions were manually and densely annotated, the images and descriptions in IDW are automatically downloaded from the Internet without any manual cleaning or refinement. An IDW-CNN is proposed for joint training on IDW and an existing image segmentation dataset such as Pascal VOC 2012 (VOC). It has two appealing properties. First, knowledge from the different datasets can be fully explored and transferred between them to improve performance. Second, segmentation accuracy on VOC can be consistently increased by selecting more data from IDW. Extensive experiments demonstrate the effectiveness and scalability of IDW-CNN, which outperforms the existing best-performing system by 12% on the VOC12 test set.
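As a rough illustration of the joint-training idea in the abstract (a shared CNN supervised both by per-pixel VOC masks and by word-level labels mined from noisy IDW descriptions), the following minimal PyTorch sketch may help. It is entirely hypothetical: the network, heads, losses, and every name below are illustrative assumptions, not the paper's actual IDW-CNN architecture, which the abstract does not specify.

import torch
import torch.nn as nn

class JointSegDescNet(nn.Module):
    """Hypothetical shared backbone with a segmentation head and a
    description head; a stand-in, not the paper's IDW-CNN."""
    def __init__(self, num_classes=21, vocab_size=1000):
        super().__init__()
        # Shared convolutional features (toy stand-in for a real backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Per-pixel class scores, supervised by VOC masks.
        self.seg_head = nn.Conv2d(64, num_classes, 1)
        # Multi-label word scores, supervised by words mined from
        # IDW descriptions (e.g. object and interaction terms).
        self.desc_head = nn.Linear(64, vocab_size)

    def forward(self, x):
        feats = self.backbone(x)
        seg_logits = self.seg_head(feats)      # B x num_classes x H x W
        pooled = feats.mean(dim=(2, 3))        # global average pooling
        desc_logits = self.desc_head(pooled)   # B x vocab_size
        return seg_logits, desc_logits

model = JointSegDescNet()
seg_loss_fn = nn.CrossEntropyLoss()    # dense pixel labels (VOC)
desc_loss_fn = nn.BCEWithLogitsLoss()  # bag-of-words labels (IDW)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# One joint step on dummy tensors standing in for a VOC and an IDW batch.
voc_images = torch.randn(2, 3, 64, 64)
voc_masks = torch.randint(0, 21, (2, 64, 64))
idw_images = torch.randn(2, 3, 64, 64)
idw_words = torch.randint(0, 2, (2, 1000)).float()

seg_logits, _ = model(voc_images)   # VOC supervises the segmentation head
_, desc_logits = model(idw_images)  # IDW supervises the description head
loss = seg_loss_fn(seg_logits, voc_masks) + desc_loss_fn(desc_logits, idw_words)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In a setup like this, every optimization step mixes both supervision signals, so gradients from the cheap, automatically harvested IDW labels shape the shared features used by the segmentation head; this matches the abstract's intuition that selecting more IDW data keeps improving VOC segmentation accuracy.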
Persistent Identifier: http://hdl.handle.net/10722/273608
ISI Accession Number ID: WOS:000418371405035

 

DC Field: Value
dc.contributor.author: Wang, Guangrun
dc.contributor.author: Luo, Ping
dc.contributor.author: Lin, Liang
dc.contributor.author: Wang, Xiaogang
dc.date.accessioned: 2019-08-12T09:56:08Z
dc.date.available: 2019-08-12T09:56:08Z
dc.date.issued: 2017
dc.identifier.citation: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017, v. 2017-January, p. 5235-5243
dc.identifier.uri: http://hdl.handle.net/10722/273608
dc.description.abstract: © 2017 IEEE. Recent deep convolutional networks (CNNs) have achieved great success in many computer vision tasks, owing to their strong learning capacity and the presence of large-scale labeled data. However, because per-pixel annotations are expensive to obtain, the performance of CNNs in semantic image segmentation is not fully exploited. This work significantly increases the segmentation accuracy of CNNs by learning from an Image Descriptions in the Wild (IDW) dataset. Unlike previous image captioning datasets, whose captions were manually and densely annotated, the images and descriptions in IDW are automatically downloaded from the Internet without any manual cleaning or refinement. An IDW-CNN is proposed for joint training on IDW and an existing image segmentation dataset such as Pascal VOC 2012 (VOC). It has two appealing properties. First, knowledge from the different datasets can be fully explored and transferred between them to improve performance. Second, segmentation accuracy on VOC can be consistently increased by selecting more data from IDW. Extensive experiments demonstrate the effectiveness and scalability of IDW-CNN, which outperforms the existing best-performing system by 12% on the VOC12 test set.
dc.language: eng
dc.relation.ispartof: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
dc.title: Learning object interactions and descriptions for semantic image segmentation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR.2017.556
dc.identifier.scopus: eid_2-s2.0-85041900256
dc.identifier.volume: 2017-January
dc.identifier.spage: 5235
dc.identifier.epage: 5243
dc.identifier.isi: WOS:000418371405035
