Conference Paper: Switchable deep network for pedestrian detection

Title: Switchable deep network for pedestrian detection
Authors: Luo, Ping; Tian, Yonglong; Wang, Xiaogang; Tang, Xiaoou
Issue Date: 2014
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, p. 899-905
Abstract: © 2014 IEEE. In this paper, we propose a Switchable Deep Network (SDN) for pedestrian detection. The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. Pedestrian detection faces the challenges of background clutter and large variations of pedestrian appearance due to pose and viewpoint changes and other factors. One of our key contributions is to propose a Switchable Restricted Boltzmann Machine (SRBM) to explicitly model the complex mixture of visual variations at multiple levels. At the feature levels, it automatically estimates saliency maps for each test sample in order to separate background clutters from discriminative regions for pedestrian detection. At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. We have devised a new generative algorithm to effectively pretrain the SDN and then fine-tune it with back-propagation. Our approach is evaluated on the Caltech and ETH datasets and achieves the state-of-the-art detection performance.
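Illustrative note: the abstract describes a "switchable" mechanism that infers, per input, the most appropriate template from a mixture of weight templates. The short Python sketch below is only a toy illustration of that switching idea under a hard-max assignment; the class name SwitchableLayer and the selection rule are assumptions made here for illustration, not the paper's SRBM formulation or the authors' code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SwitchableLayer:
    # Toy "switchable" layer: keeps K weight templates; each input switches
    # to the template with the largest total response (hard mixture assignment).
    def __init__(self, n_visible, n_hidden, n_templates, seed=0):
        rng = np.random.default_rng(seed)
        # One (W, b) pair per mixture template.
        self.W = rng.normal(0.0, 0.01, size=(n_templates, n_visible, n_hidden))
        self.b = np.zeros((n_templates, n_hidden))

    def forward(self, v):
        # Hidden pre-activations under every template: shape (K, n_hidden).
        scores = np.einsum('kvh,v->kh', self.W, v) + self.b
        # Switch variable: index of the template with the largest response.
        k = int(np.argmax(scores.sum(axis=1)))
        return k, sigmoid(scores[k])

layer = SwitchableLayer(n_visible=64, n_hidden=16, n_templates=3)
template_id, hidden = layer.forward(np.random.rand(64))
print(template_id, hidden.shape)

In the paper, the switch is part of a Switchable RBM learned generatively and fine-tuned with back-propagation; the hard-max rule above is only a stand-in for that inference step.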
Persistent Identifier: http://hdl.handle.net/10722/273671
ISSN: 1063-6919
2020 SCImago Journal Rankings: 4.658
ISI Accession Number ID: WOS:000361555600114

DC Field: Value
dc.contributor.author: Luo, Ping
dc.contributor.author: Tian, Yonglong
dc.contributor.author: Wang, Xiaogang
dc.contributor.author: Tang, Xiaoou
dc.date.accessioned: 2019-08-12T09:56:19Z
dc.date.available: 2019-08-12T09:56:19Z
dc.date.issued: 2014
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2014, p. 899-905
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/273671
dc.description.abstract: © 2014 IEEE. In this paper, we propose a Switchable Deep Network (SDN) for pedestrian detection. The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. Pedestrian detection faces the challenges of background clutter and large variations of pedestrian appearance due to pose and viewpoint changes and other factors. One of our key contributions is to propose a Switchable Restricted Boltzmann Machine (SRBM) to explicitly model the complex mixture of visual variations at multiple levels. At the feature levels, it automatically estimates saliency maps for each test sample in order to separate background clutters from discriminative regions for pedestrian detection. At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. We have devised a new generative algorithm to effectively pretrain the SDN and then fine-tune it with back-propagation. Our approach is evaluated on the Caltech and ETH datasets and achieves the state-of-the-art detection performance.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.title: Switchable deep network for pedestrian detection
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR.2014.120
dc.identifier.scopus: eid_2-s2.0-84911449919
dc.identifier.spage: 899
dc.identifier.epage: 905
dc.identifier.isi: WOS:000361555600114
dc.identifier.issnl: 1063-6919
