Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/CVPR.2014.405
- Scopus: eid_2-s2.0-84911424892
- WOS: WOS:000361555603029
Conference Paper: Object-based multiple foreground video co-segmentation
Title | Object-based multiple foreground video co-segmentation
---|---
Authors | Fu, Huazhu; Xu, Dong; Zhang, Bao; Lin, Stephen
Keywords | co-segmentation; multi-state selection graph; multiple foreground; video segmentation
Issue Date | 2014
Citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, p. 3166-3173
Abstract | We present a video co-segmentation method that uses category-independent object proposals as its basic element and can extract multiple foreground objects in a video set. The use of object elements overcomes limitations of low-level feature representations in separating complex foregrounds and backgrounds. We formulate object-based co-segmentation as a co-selection graph in which regions with foreground-like characteristics are favored while also accounting for intra-video and inter-video foreground coherence. To handle multiple foreground objects, we expand the co-selection graph model into a proposed multi-state selection graph model (MSG) that optimizes the segmentations of different objects jointly. This extension into the MSG can be applied not only to our co-selection graph, but also can be used to turn any standard graph model into a multi-state selection solution that can be optimized directly by the existing energy minimization techniques. Our experiments show that our object-based multiple foreground video co-segmentation method (ObMiC) compares well to related techniques on both single and multiple foreground cases.
Persistent Identifier | http://hdl.handle.net/10722/321619
ISSN | 1063-6919 (2023 SCImago Journal Rankings: 10.331)
ISI Accession Number ID | WOS:000361555603029
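The multi-state selection idea in the abstract — jointly choosing one object proposal per foreground "state" in every frame, favoring proposals with low foreground cost while keeping consecutive frames coherent — can be illustrated with a toy sketch. This is not the authors' MSG implementation (they optimize a graph model with standard energy-minimization techniques); it is a hypothetical brute-force version over made-up unary and pairwise costs, small enough to enumerate exhaustively:

```python
import itertools

def coselect(unary, pairwise, num_states=2):
    """Toy multi-state selection: for each frame, jointly pick
    `num_states` DISTINCT proposal ids (one per foreground object),
    minimizing the sum of unary foreground costs plus a temporal
    pairwise coherence cost between consecutive frames.
    Brute-force enumeration -- only feasible for tiny inputs."""
    n_frames = len(unary)
    n_props = len(unary[0])
    # A joint label for a frame is an ordered tuple of distinct proposals.
    labels = list(itertools.permutations(range(n_props), num_states))
    best_cost, best_path = float("inf"), None
    for path in itertools.product(labels, repeat=n_frames):
        cost = 0.0
        for t, lab in enumerate(path):
            cost += sum(unary[t][p] for p in lab)          # foreground-likeness
            if t > 0:                                      # temporal coherence
                cost += sum(pairwise(path[t - 1][k], lab[k])
                            for k in range(num_states))
        if cost < best_cost:
            best_cost, best_path = cost, path
    return best_cost, best_path

# Hypothetical example: 3 frames, 3 proposals each; proposal 0 looks most
# foreground-like, and switching proposals between frames costs 1.0.
unary = [[0.0, 1.0, 2.0]] * 3
pairwise = lambda a, b: 0.0 if a == b else 1.0
cost, path = coselect(unary, pairwise, num_states=2)
# -> cost 3.0, selecting proposals (0, 1) in every frame
```

The brute force stands in for the paper's point that, once the problem is posed as a multi-state selection graph, existing energy-minimization machinery can optimize all foreground states jointly instead of one at a time.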
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Fu, Huazhu | - |
dc.contributor.author | Xu, Dong | - |
dc.contributor.author | Zhang, Bao | - |
dc.contributor.author | Lin, Stephen | - |
dc.date.accessioned | 2022-11-03T02:20:16Z | - |
dc.date.available | 2022-11-03T02:20:16Z | - |
dc.date.issued | 2014 | - |
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, p. 3166-3173 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321619 | - |
dc.description.abstract | We present a video co-segmentation method that uses category-independent object proposals as its basic element and can extract multiple foreground objects in a video set. The use of object elements overcomes limitations of low-level feature representations in separating complex foregrounds and backgrounds. We formulate object-based co-segmentation as a co-selection graph in which regions with foreground-like characteristics are favored while also accounting for intra-video and inter-video foreground coherence. To handle multiple foreground objects, we expand the co-selection graph model into a proposed multi-state selection graph model (MSG) that optimizes the segmentations of different objects jointly. This extension into the MSG can be applied not only to our co-selection graph, but also can be used to turn any standard graph model into a multi-state selection solution that can be optimized directly by the existing energy minimization techniques. Our experiments show that our object-based multiple foreground video co-segmentation method (ObMiC) compares well to related techniques on both single and multiple foreground cases. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | - |
dc.subject | co-segmentation | - |
dc.subject | multi-state selection graph | - |
dc.subject | multiple foreground | - |
dc.subject | video segmentation | - |
dc.title | Object-based multiple foreground video co-segmentation | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR.2014.405 | - |
dc.identifier.scopus | eid_2-s2.0-84911424892 | - |
dc.identifier.spage | 3166 | - |
dc.identifier.epage | 3173 | - |
dc.identifier.isi | WOS:000361555603029 | - |