Links to fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/CVPR.2019.00097
- Scopus: eid_2-s2.0-85073466198
- WOS: WOS:000529484001003
Conference Paper: Unsupervised moving object detection via contextual information separation
Title | Unsupervised moving object detection via contextual information separation |
---|---|
Authors | Yang, Yanchao; Loquercio, Antonio; Scaramuzza, Davide; Soatto, Stefano |
Keywords | Grouping and Shape; Representation Learning; Scene Analysis and Understanding; Segmentation; Statistical Learning |
Issue Date | 2019 |
Citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, v. 2019-June, p. 879-888 |
Abstract | We propose an adversarial contextual model for detecting moving objects in images. A deep neural network is trained to predict the optical flow in a region using information from everywhere else but that region (context), while another network attempts to make such context as uninformative as possible. The result is a model where hypotheses naturally compete with no need for explicit regularization or hyper-parameter tuning. Although our method requires no supervision whatsoever, it outperforms several methods that are pre-trained on large annotated datasets. Our model can be thought of as a generalization of classical variational generative region-based segmentation, but in a way that avoids explicit regularization or solution of partial differential equations at run-time. |
Persistent Identifier | http://hdl.handle.net/10722/325447 |
ISSN | 1063-6919; 2023 SCImago Journal Rankings: 10.331 |
ISI Accession Number ID | WOS:000529484001003 |
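The adversarial objective described in the abstract can be illustrated with a toy sketch: an "inpainter" predicts the flow inside a candidate region using only the context (everything outside the region), and a good object mask is one whose interior the context fails to predict. This is a minimal numpy stand-in, not the paper's implementation; the deep networks are replaced by a trivial mean-of-context predictor, and all function names (`inpaint_from_context`, `separation_score`) are hypothetical.

```python
import numpy as np

def inpaint_from_context(flow, mask):
    """Toy 'inpainter': predict the flow inside the mask as the mean
    flow of the context (all pixels outside the mask). A stand-in for
    the context-conditioned prediction network in the paper."""
    context = flow[mask == 0]
    fill = context.mean() if context.size else 0.0
    return np.full_like(flow, fill)

def separation_score(flow, mask):
    """Mean prediction error inside the mask. The mask generator seeks
    masks that maximize this score (context uninformative about the
    region), while the inpainter is trained to minimize it."""
    pred = inpaint_from_context(flow, mask)
    err = np.abs(flow - pred)[mask == 1]
    return err.mean() if err.size else 0.0

# Synthetic 8x8 horizontal flow field: background moves right (+1),
# a 3x3 object in the corner moves left (-1), i.e. moves independently.
flow = np.ones((8, 8))
flow[:3, :3] = -1.0

object_mask = np.zeros((8, 8))
object_mask[:3, :3] = 1          # covers the independently moving object
random_mask = np.zeros((8, 8))
random_mask[5:, 5:] = 1          # covers a patch of plain background

# The object's motion cannot be predicted from its context, so its mask
# scores higher than a background mask.
print(separation_score(flow, object_mask) > separation_score(flow, random_mask))
```

Under this toy objective, maximizing the score over masks recovers regions whose motion is statistically independent of their surroundings, which is the intuition behind the paper's unsupervised detection criterion.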
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, Yanchao | - |
dc.contributor.author | Loquercio, Antonio | - |
dc.contributor.author | Scaramuzza, Davide | - |
dc.contributor.author | Soatto, Stefano | - |
dc.date.accessioned | 2023-02-27T07:33:24Z | - |
dc.date.available | 2023-02-27T07:33:24Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, v. 2019-June, p. 879-888 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/325447 | - |
dc.description.abstract | We propose an adversarial contextual model for detecting moving objects in images. A deep neural network is trained to predict the optical flow in a region using information from everywhere else but that region (context), while another network attempts to make such context as uninformative as possible. The result is a model where hypotheses naturally compete with no need for explicit regularization or hyper-parameter tuning. Although our method requires no supervision whatsoever, it outperforms several methods that are pre-trained on large annotated datasets. Our model can be thought of as a generalization of classical variational generative region-based segmentation, but in a way that avoids explicit regularization or solution of partial differential equations at run-time. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | - |
dc.subject | Grouping and Shape | - |
dc.subject | Representation Learning | - |
dc.subject | Scene Analysis and Understanding | - |
dc.subject | Segmentation | - |
dc.subject | Statistical Learning | - |
dc.title | Unsupervised moving object detection via contextual information separation | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR.2019.00097 | - |
dc.identifier.scopus | eid_2-s2.0-85073466198 | - |
dc.identifier.volume | 2019-June | - |
dc.identifier.spage | 879 | - |
dc.identifier.epage | 888 | - |
dc.identifier.isi | WOS:000529484001003 | - |