Conference Paper: Object-based multiple foreground video co-segmentation

Title: Object-based multiple foreground video co-segmentation
Authors: Fu, Huazhu; Xu, Dong; Zhang, Bao; Lin, Stephen
Keywords: co-segmentation; multi-state selection graph; multiple foreground; video segmentation
Issue Date: 2014
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, p. 3166-3173
Abstract: We present a video co-segmentation method that uses category-independent object proposals as its basic element and can extract multiple foreground objects in a video set. The use of object elements overcomes limitations of low-level feature representations in separating complex foregrounds and backgrounds. We formulate object-based co-segmentation as a co-selection graph in which regions with foreground-like characteristics are favored while also accounting for intra-video and inter-video foreground coherence. To handle multiple foreground objects, we expand the co-selection graph model into a proposed multi-state selection graph (MSG) model that jointly optimizes the segmentations of different objects. This MSG extension applies not only to our co-selection graph, but can also turn any standard graph model into a multi-state selection formulation that can be optimized directly by existing energy minimization techniques. Our experiments show that our object-based multiple foreground video co-segmentation method (ObMiC) compares well to related techniques on both single and multiple foreground cases.
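As the abstract describes, the method builds a selection graph over per-frame object proposals whose energy combines a foreground-likeness term with intra-video and inter-video coherence terms, and minimizes it with standard energy minimization techniques. The sketch below is a rough illustration only, not the authors' implementation: it selects one proposal per frame in a single video by minimizing a chain-structured energy with dynamic programming. The function name select_proposals, the cost arrays, and the toy numbers are all hypothetical, and the paper's inter-video coherence and multi-state (multiple-foreground) extensions are omitted.

import numpy as np

def select_proposals(unary, pairwise):
    """Pick one object proposal per frame by minimizing a chain-structured
    selection-graph energy with dynamic programming (Viterbi).

    unary    : list of length T; unary[t][i] is the cost of proposal i in
               frame t (lower = more foreground-like).
    pairwise : list of length T-1; pairwise[t][i, j] is the cost of choosing
               proposal i in frame t and proposal j in frame t+1
               (lower = more temporally coherent).
    """
    T = len(unary)
    cost = np.asarray(unary[0], dtype=float)   # best cost of each proposal in frame 0
    backptr = []
    for t in range(1, T):
        # total[i, j]: best energy ending at proposal j of frame t via proposal i of frame t-1
        total = cost[:, None] + np.asarray(pairwise[t - 1]) + np.asarray(unary[t])[None, :]
        backptr.append(total.argmin(axis=0))
        cost = total.min(axis=0)
    # Backtrack the minimum-energy selection
    labels = [int(cost.argmin())]
    for bp in reversed(backptr):
        labels.append(int(bp[labels[-1]]))
    return labels[::-1]

# Toy example: 3 frames, 2 candidate proposals per frame (made-up costs)
unary = [np.array([0.2, 0.9]), np.array([0.8, 0.3]), np.array([0.1, 0.7])]
pairwise = [np.array([[0.0, 0.5], [0.5, 0.0]])] * 2
print(select_proposals(unary, pairwise))   # -> [0, 0, 0]

In the paper's setting, the same kind of energy is additionally coupled across videos, and the MSG construction adds one selection state per foreground object so that multiple objects are segmented jointly, while the graph remains amenable to existing energy minimization techniques.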
Persistent Identifier: http://hdl.handle.net/10722/321619
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
ISI Accession Number ID: WOS:000361555603029

 

DC Field: Value
dc.contributor.author: Fu, Huazhu
dc.contributor.author: Xu, Dong
dc.contributor.author: Zhang, Bao
dc.contributor.author: Lin, Stephen
dc.date.accessioned: 2022-11-03T02:20:16Z
dc.date.available: 2022-11-03T02:20:16Z
dc.date.issued: 2014
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, p. 3166-3173
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/321619
dc.description.abstract: We present a video co-segmentation method that uses category-independent object proposals as its basic element and can extract multiple foreground objects in a video set. The use of object elements overcomes limitations of low-level feature representations in separating complex foregrounds and backgrounds. We formulate object-based co-segmentation as a co-selection graph in which regions with foreground-like characteristics are favored while also accounting for intra-video and inter-video foreground coherence. To handle multiple foreground objects, we expand the co-selection graph model into a proposed multi-state selection graph (MSG) model that jointly optimizes the segmentations of different objects. This MSG extension applies not only to our co-selection graph, but can also turn any standard graph model into a multi-state selection formulation that can be optimized directly by existing energy minimization techniques. Our experiments show that our object-based multiple foreground video co-segmentation method (ObMiC) compares well to related techniques on both single and multiple foreground cases.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.subject: co-segmentation
dc.subject: multi-state selection graph
dc.subject: multiple foreground
dc.subject: video segmentation
dc.title: Object-based multiple foreground video co-segmentation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR.2014.405
dc.identifier.scopus: eid_2-s2.0-84911424892
dc.identifier.spage: 3166
dc.identifier.epage: 3173
dc.identifier.isi: WOS:000361555603029
