Article: Action recognition using multilevel features and latent structural SVM

Title: Action recognition using multilevel features and latent structural SVM
Authors: Wu, Xinxiao; Xu, Dong; Duan, Lixin; Luo, Jiebo; Jia, Yunde
Keywords: Action recognition; action-scene interaction; latent structural SVM; multilevel features
Issue Date: 2013
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2013, v. 23, n. 8, p. 1422-1431
Abstract: We first propose a new low-level visual feature, called spatio-temporal context distribution feature of interest points, to describe human actions. Each action video is expressed as a set of relative XYT coordinates between pairwise interest points in a local region. We learn a global Gaussian mixture model (GMM) (referred to as a universal background model) using the relative coordinate features from all the training videos, and then we represent each video as the normalized parameters of a video-specific GMM adapted from the global GMM. In order to capture the spatio-temporal relationships at different levels, multiple GMMs are utilized to describe the context distributions of interest points over multiscale local regions. Motivated by the observation that some actions share similar motion patterns, we additionally propose a novel mid-level class correlation feature to capture the semantic correlations between different action classes. Each input action video is represented by a set of decision values obtained from the pre-learned classifiers of all the action classes, with each decision value measuring the likelihood that the input video belongs to the corresponding action class. Moreover, human actions are often associated with some specific natural environments and also exhibit high correlation with particular scene classes. It is therefore beneficial to utilize the contextual scene information for action recognition. In this paper, we build the high-level co-occurrence relationship between action classes and scene classes to discover the mutual contextual constraints between action and scene. By treating the scene class label as a latent variable, we propose to use the latent structural SVM (LSSVM) model to jointly capture the compatibility between multilevel action features (e.g., low-level visual context distribution feature and the corresponding mid-level class correlation feature) and action classes, the compatibility between multilevel scene features (i.e., SIFT feature and the corresponding class correlation feature) and scene classes, and the contextual relationship between action classes and scene classes. Extensive experiments on UCF Sports, YouTube and UCF50 datasets demonstrate the effectiveness of the proposed multilevel features and action-scene interaction based LSSVM model for human action recognition. Moreover, our method generally achieves higher recognition accuracy than other state-of-the-art methods on these datasets. © 1991-2012 IEEE.
Persistent Identifier: http://hdl.handle.net/10722/321522
ISSN: 1051-8215
2023 Impact Factor: 8.3
2023 SCImago Journal Rankings: 2.299
ISI Accession Number ID: WOS:000322671600013
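
The low-level feature described in the abstract can be summarized in a short sketch: pairwise relative XYT offsets between interest points in a local region, a global GMM (universal background model) fit on features pooled from all training videos, and a per-video descriptor built from the normalized parameters of a GMM adapted to that video. The Python sketch below is illustrative only and not the authors' code: the region radius, number of mixture components, relevance factor, and mean-only adaptation are assumptions chosen for clarity, and only a single region scale is shown (the paper uses multiple GMMs over multiscale regions).

# Illustrative sketch (not the authors' implementation) of the
# spatio-temporal context distribution feature described in the abstract.
# Interest points are assumed to be given as (x, y, t) arrays per video.
import numpy as np
from sklearn.mixture import GaussianMixture


def relative_xyt_features(points, radius=30.0):
    """Relative (dx, dy, dt) offsets between pairs of interest points
    that fall within a local spatio-temporal region of the given radius."""
    points = np.asarray(points, dtype=float)  # shape (N, 3): columns x, y, t
    feats = []
    for i in range(len(points)):
        diffs = points - points[i]                  # pairwise offsets
        dists = np.linalg.norm(diffs, axis=1)
        local = (dists > 0) & (dists <= radius)     # keep local neighbours only
        feats.append(diffs[local])
    return np.vstack(feats) if feats else np.empty((0, 3))


def fit_global_gmm(all_videos_points, n_components=64, seed=0):
    """Universal background model: one GMM fit on relative-coordinate
    features pooled from every training video."""
    pooled = np.vstack([relative_xyt_features(p) for p in all_videos_points])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='diag', random_state=seed)
    gmm.fit(pooled)
    return gmm


def adapted_gmm_descriptor(gmm, video_points, relevance=16.0):
    """Mean-only MAP adaptation of the global GMM to one video; the video
    is then represented by its normalized adapted-mean supervector."""
    feats = relative_xyt_features(video_points)
    resp = gmm.predict_proba(feats)                 # (N, K) component posteriors
    n_k = resp.sum(axis=0) + 1e-10                  # soft counts per component
    mean_k = (resp.T @ feats) / n_k[:, None]        # per-component data means
    alpha = (n_k / (n_k + relevance))[:, None]      # adaptation weights
    adapted = alpha * mean_k + (1.0 - alpha) * gmm.means_
    supervector = ((adapted - gmm.means_) / np.sqrt(gmm.covariances_)).ravel()
    return supervector / (np.linalg.norm(supervector) + 1e-10)

The mid-level class correlation feature mentioned in the abstract would then simply stack the decision values of pre-learned per-class classifiers applied to this low-level descriptor.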


DC Field: Value
dc.contributor.author: Wu, Xinxiao
dc.contributor.author: Xu, Dong
dc.contributor.author: Duan, Lixin
dc.contributor.author: Luo, Jiebo
dc.contributor.author: Jia, Yunde
dc.date.accessioned: 2022-11-03T02:19:30Z
dc.date.available: 2022-11-03T02:19:30Z
dc.date.issued: 2013
dc.identifier.citation: IEEE Transactions on Circuits and Systems for Video Technology, 2013, v. 23, n. 8, p. 1422-1431
dc.identifier.issn: 1051-8215
dc.identifier.uri: http://hdl.handle.net/10722/321522
dc.description.abstract: We first propose a new low-level visual feature, called spatio-temporal context distribution feature of interest points, to describe human actions. Each action video is expressed as a set of relative XYT coordinates between pairwise interest points in a local region. We learn a global Gaussian mixture model (GMM) (referred to as a universal background model) using the relative coordinate features from all the training videos, and then we represent each video as the normalized parameters of a video-specific GMM adapted from the global GMM. In order to capture the spatio-temporal relationships at different levels, multiple GMMs are utilized to describe the context distributions of interest points over multiscale local regions. Motivated by the observation that some actions share similar motion patterns, we additionally propose a novel mid-level class correlation feature to capture the semantic correlations between different action classes. Each input action video is represented by a set of decision values obtained from the pre-learned classifiers of all the action classes, with each decision value measuring the likelihood that the input video belongs to the corresponding action class. Moreover, human actions are often associated with some specific natural environments and also exhibit high correlation with particular scene classes. It is therefore beneficial to utilize the contextual scene information for action recognition. In this paper, we build the high-level co-occurrence relationship between action classes and scene classes to discover the mutual contextual constraints between action and scene. By treating the scene class label as a latent variable, we propose to use the latent structural SVM (LSSVM) model to jointly capture the compatibility between multilevel action features (e.g., low-level visual context distribution feature and the corresponding mid-level class correlation feature) and action classes, the compatibility between multilevel scene features (i.e., SIFT feature and the corresponding class correlation feature) and scene classes, and the contextual relationship between action classes and scene classes. Extensive experiments on UCF Sports, YouTube and UCF50 datasets demonstrate the effectiveness of the proposed multilevel features and action-scene interaction based LSSVM model for human action recognition. Moreover, our method generally achieves higher recognition accuracy than other state-of-the-art methods on these datasets. © 1991-2012 IEEE.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Circuits and Systems for Video Technology
dc.subject: Action recognition
dc.subject: action-scene interaction
dc.subject: latent structural SVM
dc.subject: multilevel features
dc.title: Action recognition using multilevel features and latent structural SVM
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TCSVT.2013.2244794
dc.identifier.scopus: eid_2-s2.0-84881455497
dc.identifier.volume: 23
dc.identifier.issue: 8
dc.identifier.spage: 1422
dc.identifier.epage: 1431
dc.identifier.isi: WOS:000322671600013
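
For the action-scene model itself, the abstract describes a latent structural SVM in which the scene class label is a latent variable and the score combines action-feature compatibility, scene-feature compatibility, and an action-scene co-occurrence term. The sketch below shows only the joint scoring and inference step under assumed dense weight matrices; the names w_action, w_scene and cooccurrence are hypothetical, and training of these weights is not shown.

# Illustrative sketch (not the released implementation) of joint inference
# with a latent scene label, following the scoring terms in the abstract.
import numpy as np


def joint_score(action_feat, scene_feat, action_cls, scene_cls,
                w_action, w_scene, cooccurrence):
    """Compatibility of (action class, scene class) with the multilevel
    action and scene features, plus the action-scene co-occurrence term."""
    return (w_action[action_cls] @ action_feat
            + w_scene[scene_cls] @ scene_feat
            + cooccurrence[action_cls, scene_cls])


def predict_action(action_feat, scene_feat, w_action, w_scene, cooccurrence):
    """Predict the action class by maximizing the joint score over both the
    action label and the latent scene label."""
    n_actions, n_scenes = cooccurrence.shape
    scores = np.array([[joint_score(action_feat, scene_feat, a, s,
                                    w_action, w_scene, cooccurrence)
                        for s in range(n_scenes)]
                       for a in range(n_actions)])
    best_action, best_scene = np.unravel_index(scores.argmax(), scores.shape)
    return best_action, best_scene, scores[best_action, best_scene]

A toy call such as predict_action(np.random.randn(10), np.random.randn(5), np.random.randn(3, 10), np.random.randn(4, 5), np.random.randn(3, 4)) returns the action label, the maximizing latent scene label, and their joint score, mirroring the max over latent variables in LSSVM inference.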
