Conference Paper: Self-occlusions and disocclusions in causal video object segmentation

Title: Self-occlusions and disocclusions in causal video object segmentation
Authors: Yang, Yanchao; Sundaramoorthi, Ganesh; Soatto, Stefano
Issue Date: 2015
Citation: Proceedings of the IEEE International Conference on Computer Vision (ICCV 2015), 2015, p. 4408-4416
Abstract: We propose a method to detect disocclusion in video sequences of three-dimensional scenes and to partition the disoccluded regions into objects, defined by coherent deformation corresponding to surfaces in the scene. Our method infers deformation fields that are piecewise smooth by construction, without the need for an explicit regularizer and the associated choice of weight. It then partitions the disoccluded region and groups its components with objects by leveraging the complementarity of motion and appearance cues: where appearance changes within an object, motion can usually be reliably inferred and used for grouping; where appearance is close to constant, it can be used for grouping directly. We integrate both cues in an energy minimization framework, incorporate prior assumptions explicitly into the energy, and propose a numerical scheme.
Persistent Identifier: http://hdl.handle.net/10722/325634
ISSN: 1550-5499
2020 SCImago Journal Rankings: 4.133
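The abstract describes grouping disoccluded pixels with objects by combining motion and appearance cues inside an energy minimization. A minimal toy sketch of that idea is below; it is not the authors' formulation (the cost arrays, the min-of-cues data term, and the 1-D chain smoothness are all illustrative assumptions):

```python
import numpy as np

def grouping_energy(labels, motion_cost, appearance_cost, lam=1.0):
    """Toy energy: per-pixel data terms plus pairwise smoothness.

    labels:          (N,) int array, object assignment per pixel
    motion_cost:     (N, K) cost of assigning pixel i to object k by motion
    appearance_cost: (N, K) cost of the same assignment by appearance

    NOTE: illustrative only, not the paper's energy.
    """
    idx = np.arange(len(labels))
    # Data term: take the cheaper cue at each pixel -- a crude stand-in
    # for the complementarity of motion and appearance described above.
    data = np.minimum(motion_cost[idx, labels],
                      appearance_cost[idx, labels]).sum()
    # Smoothness term on a 1-D chain: neighbors prefer the same label.
    smooth = lam * np.sum(labels[:-1] != labels[1:])
    return data + smooth

# Brute-force minimization over a tiny 4-pixel, 2-object example.
rng = np.random.default_rng(0)
mc = rng.random((4, 2))   # assumed motion costs
ac = rng.random((4, 2))   # assumed appearance costs
best = min(
    (np.array([a, b, c, d]) for a in (0, 1) for b in (0, 1)
     for c in (0, 1) for d in (0, 1)),
    key=lambda L: grouping_energy(L, mc, ac),
)
print(best, grouping_energy(best, mc, ac))
```

In the paper, prior assumptions are incorporated into the energy explicitly and a dedicated numerical scheme is proposed; the brute-force search here is only feasible because the example has four pixels.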


DC Field: Value
dc.contributor.author: Yang, Yanchao
dc.contributor.author: Sundaramoorthi, Ganesh
dc.contributor.author: Soatto, Stefano
dc.date.accessioned: 2023-02-27T07:34:55Z
dc.date.available: 2023-02-27T07:34:55Z
dc.date.issued: 2015
dc.identifier.citation: Proceedings of the IEEE International Conference on Computer Vision (ICCV 2015), 2015, p. 4408-4416
dc.identifier.issn: 1550-5499
dc.identifier.uri: http://hdl.handle.net/10722/325634
dc.description.abstract: We propose a method to detect disocclusion in video sequences of three-dimensional scenes and to partition the disoccluded regions into objects, defined by coherent deformation corresponding to surfaces in the scene. Our method infers deformation fields that are piecewise smooth by construction without the need for an explicit regularizer and the associated choice of weight. It then partitions the disoccluded region and groups its components with objects by leveraging on the complementarity of motion and appearance cues: Where appearance changes within an object, motion can usually be reliably inferred and used for grouping. Where appearance is close to constant, it can be used for grouping directly. We integrate both cues in an energy minimization framework, incorporate prior assumptions explicitly into the energy, and propose a numerical scheme.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE International Conference on Computer Vision
dc.title: Self-occlusions and disocclusions in causal video object segmentation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICCV.2015.501
dc.identifier.scopus: eid_2-s2.0-84973884813
dc.identifier.volume: 2015 International Conference on Computer Vision, ICCV 2015
dc.identifier.spage: 4408
dc.identifier.epage: 4416