Conference Paper: DystAb: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping

Title: DystAb: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping
Authors: Yang, Yanchao; Lai, Brian; Soatto, Stefano
Issue Date: 2021
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, p. 2825-2835
Abstract: We describe an unsupervised method to detect and segment portions of images of live scenes that, at some point in time, are seen moving as a coherent whole, which we refer to as objects. Our method first partitions the motion field by minimizing the mutual information between segments. Then, it uses the segments to learn object models that can be used for detection in a static image. Static and dynamic models are represented by deep neural networks trained jointly in a bootstrapping strategy, which enables extrapolation to previously unseen objects. While the training process requires motion, the resulting object segmentation network can be used on either static images or videos at inference time. As the volume of seen videos grows, more and more objects are seen moving, priming their detection, which then serves as a regularizer for new objects, turning our method into unsupervised continual learning to segment objects. Our models are compared to the state of the art in both video object segmentation and salient object detection. In the six benchmark datasets tested, our models compare favorably even to those using pixel-level supervision, despite requiring no manual annotation.
Persistent Identifier: http://hdl.handle.net/10722/325533
ISSN: 1063-6919
2020 SCImago Journal Rankings: 4.658
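The abstract's motion-field partitioning criterion — splitting a scene so that segments carry minimal mutual information about one another — can be illustrated with a toy sketch. This is not the paper's implementation (which uses deep networks over dense motion fields); it only shows the quantity being minimized, computed from a hypothetical joint probability table over two segments' quantized motions:

```python
import math

def mutual_information(joint):
    """I(X;Y) in nats from a joint probability table (rows: X, cols: Y)."""
    px = [sum(row) for row in joint]           # marginal of X
    py = [sum(col) for col in zip(*joint)]     # marginal of Y
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log(p / (px[i] * py[j]))
    return mi

# Segments moving independently: knowing one tells nothing about the other.
independent = [[0.25, 0.25], [0.25, 0.25]]
# Segments moving in lockstep: one fully determines the other.
coupled = [[0.5, 0.0], [0.0, 0.5]]

print(mutual_information(independent))  # → 0.0
print(mutual_information(coupled))      # → ≈0.693 (log 2)
```

A partition into genuinely distinct objects drives this quantity toward zero, since each object's motion is uninformative about the others' — which is why minimizing it yields object-like segments.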

 

DC Field | Value | Language
dc.contributor.author | Yang, Yanchao | -
dc.contributor.author | Lai, Brian | -
dc.contributor.author | Soatto, Stefano | -
dc.date.accessioned | 2023-02-27T07:34:04Z | -
dc.date.available | 2023-02-27T07:34:04Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, p. 2825-2835 | -
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | http://hdl.handle.net/10722/325533 | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | -
dc.title | DystAb: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/CVPR46437.2021.00285 | -
dc.identifier.scopus | eid_2-s2.0-85111233679 | -
dc.identifier.spage | 2825 | -
dc.identifier.epage | 2835 | -
