Conference Paper: Joint semantic segmentation by searching for compatible-competitive references

Title: Joint semantic segmentation by searching for compatible-competitive references
Authors: Luo, Ping; Wang, Xiaogang; Lin, Liang; Tang, Xiaoou
Keywords: image search; label propagation; scene understanding; semantic segmentation
Issue Date: 2012
Citation: MM 2012 - Proceedings of the 20th ACM International Conference on Multimedia, 2012, p. 777-780
Abstract: This paper presents a framework for semantically segmenting a target image without tags by searching for references in an image database where all the images are unsegmented but annotated with tags. We jointly segment the target image and its references by optimizing both semantic consistency within individual images and correspondences between the target image and each of its references. In our framework, we first retrieve two types of references with a semantic-driven scheme: i) compatible references, which share a similar global appearance with the target image; and ii) competitive references, which have an appearance distinct from the target image but share similar tags with one of the compatible references. The two types of references provide complementary information for assisting the segmentation of the target image. We then construct a novel graphical representation in which the vertices are superpixels extracted from the target image and its references. The segmentation problem is posed as labeling all the vertices with the semantic tags obtained from the references. The method can label images without pixel-level annotation or classifier training, and it outperforms state-of-the-art approaches on the MSRC-21 database. © 2012 ACM.
Persistent Identifier: http://hdl.handle.net/10722/273522
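The abstract describes a two-stage pipeline: first retrieve compatible and competitive references from a tag-annotated database, then jointly label superpixels. As a rough illustration of the retrieval stage only, the Python sketch below selects compatible references (nearest in global appearance) and competitive references (distant in appearance but sharing tags with a compatible reference). The descriptors, tag sets, and neighbour counts are placeholder assumptions, not the authors' implementation.

```python
# Hedged sketch of compatible/competitive reference retrieval (assumed setup,
# not the paper's code): global descriptors and tag sets are toy stand-ins.
import numpy as np

def retrieve_references(target_feat, db_feats, db_tags, k_compat=3, k_compet=3):
    """Pick compatible references (nearest global appearance) and
    competitive references (distant appearance, overlapping tags)."""
    # Appearance distance between the target and every database image.
    dists = np.linalg.norm(db_feats - target_feat, axis=1)

    # Compatible references: nearest neighbours in appearance space.
    compat_idx = np.argsort(dists)[:k_compat]

    # Competitive references: far from the target in appearance, but
    # sharing at least one tag with some compatible reference.
    compat_tags = set().union(*(db_tags[i] for i in compat_idx))
    candidates = [i for i in np.argsort(dists)[::-1]
                  if i not in set(compat_idx) and db_tags[i] & compat_tags]
    compet_idx = candidates[:k_compet]
    return list(compat_idx), compet_idx

# Toy example: random global descriptors and tag sets standing in for real data.
rng = np.random.default_rng(0)
db_feats = rng.normal(size=(50, 128))
db_tags = [set(rng.choice(["sky", "grass", "cow", "road", "tree"],
                          size=2, replace=False)) for _ in range(50)]
target_feat = rng.normal(size=128)

compatible, competitive = retrieve_references(target_feat, db_feats, db_tags)
print("compatible:", compatible, "competitive:", competitive)
```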

 

DC Field: Value
dc.contributor.author: Luo, Ping
dc.contributor.author: Wang, Xiaogang
dc.contributor.author: Lin, Liang
dc.contributor.author: Tang, Xiaoou
dc.date.accessioned: 2019-08-12T09:55:50Z
dc.date.available: 2019-08-12T09:55:50Z
dc.date.issued: 2012
dc.identifier.citation: MM 2012 - Proceedings of the 20th ACM International Conference on Multimedia, 2012, p. 777-780
dc.identifier.uri: http://hdl.handle.net/10722/273522
dc.description.abstract: This paper presents a framework for semantically segmenting a target image without tags by searching for references in an image database where all the images are unsegmented but annotated with tags. We jointly segment the target image and its references by optimizing both semantic consistency within individual images and correspondences between the target image and each of its references. In our framework, we first retrieve two types of references with a semantic-driven scheme: i) compatible references, which share a similar global appearance with the target image; and ii) competitive references, which have an appearance distinct from the target image but share similar tags with one of the compatible references. The two types of references provide complementary information for assisting the segmentation of the target image. We then construct a novel graphical representation in which the vertices are superpixels extracted from the target image and its references. The segmentation problem is posed as labeling all the vertices with the semantic tags obtained from the references. The method can label images without pixel-level annotation or classifier training, and it outperforms state-of-the-art approaches on the MSRC-21 database. © 2012 ACM.
dc.language: eng
dc.relation.ispartof: MM 2012 - Proceedings of the 20th ACM International Conference on Multimedia
dc.subject: image search
dc.subject: label propagation
dc.subject: scene understanding
dc.subject: semantic segmentation
dc.title: Joint semantic segmentation by searching for compatible-competitive references
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/2393347.2396310
dc.identifier.scopus: eid_2-s2.0-84871376194
dc.identifier.spage: 777
dc.identifier.epage: 780
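For the second stage described in the abstract, where segmentation is posed as labeling superpixel vertices with the tags obtained from the references, a toy label-propagation over an affinity graph gives the flavour of the idea. The descriptors, affinity construction, and update rule below are illustrative assumptions, not the paper's actual graphical model or inference procedure.

```python
# Toy illustration (not the authors' formulation): seed reference superpixels
# with their tags and propagate labels to target superpixels over an affinity graph.
import numpy as np

def propagate_labels(features, seed_labels, n_labels, alpha=0.8, iters=50):
    """Simple graph label propagation over superpixel descriptors.
    seed_labels[i] is a tag index for reference superpixels, -1 for
    unlabelled target superpixels."""
    # Affinity between superpixels from descriptor similarity.
    d2 = np.square(features[:, None, :] - features[None, :, :]).sum(-1)
    W = np.exp(-d2 / (2 * d2.mean() + 1e-12))
    np.fill_diagonal(W, 0.0)
    S = W / (W.sum(1, keepdims=True) + 1e-12)  # row-normalised affinity

    # One-hot seeds from the reference superpixels.
    Y0 = np.zeros((len(features), n_labels))
    for i, lab in enumerate(seed_labels):
        if lab >= 0:
            Y0[i, lab] = 1.0

    Y = Y0.copy()
    for _ in range(iters):  # Y <- alpha * S @ Y + (1 - alpha) * Y0
        Y = alpha * (S @ Y) + (1 - alpha) * Y0
    return Y.argmax(1)  # predicted tag index per superpixel

# Toy data: 30 superpixels, 8-dim descriptors, 3 tags, roughly half seeded.
rng = np.random.default_rng(1)
feats = rng.normal(size=(30, 8))
seeds = np.where(rng.random(30) < 0.5, rng.integers(0, 3, 30), -1)
print(propagate_labels(feats, seeds, n_labels=3))
```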
