Conference Paper: Revisiting Skeleton-based Action Recognition

Title: Revisiting Skeleton-based Action Recognition
Authors: Duan, Haodong; Zhao, Yue; Chen, Kai; Lin, Dahua; Dai, Bo
Keywords: Action and event recognition; Video analysis and understanding
Issue Date: 2022
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 2959-2968
Abstract: Human skeleton, as a compact representation of human action, has received increasing attention in recent years. Many skeleton-based action recognition methods adopt GCNs to extract features on top of human skeletons. Despite the positive results shown in these attempts, GCN-based methods are subject to limitations in robustness, interoperability, and scalability. In this work, we propose PoseConv3D, a new approach to skeleton-based action recognition. PoseConv3D relies on a 3D heatmap volume instead of a graph sequence as the base representation of human skeletons. Compared to GCN-based methods, PoseConv3D is more effective in learning spatiotemporal features, more robust against pose estimation noises, and generalizes better in cross-dataset settings. Also, PoseConv3D can handle multiple-person scenarios without additional computation costs. The hierarchical features can be easily integrated with other modalities at early fusion stages, providing a great design space to boost the performance. PoseConv3D achieves the state-of-the-art on five of six standard skeleton-based action recognition benchmarks. Once fused with other modalities, it achieves the state-of-the-art on all eight multi-modality action recognition benchmarks. Code has been made available at: https://github.com/kennymckormick/pyskl.
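
The sketch below illustrates the representation the abstract describes: per-frame 2D keypoints are rendered as Gaussian heatmaps and stacked into a K x T x H x W volume that a 3D CNN consumes. It is a minimal example only; the heatmap resolution, sigma, toy network, and 60-class output head are assumptions for illustration, not the PoseConv3D implementation released at https://github.com/kennymckormick/pyskl.

    # Illustrative sketch: build a 3D heatmap volume (K x T x H x W) from 2D
    # keypoints and feed it to a small 3D CNN. Shapes and the toy network are
    # assumptions for illustration, not the released PoseConv3D code.
    import numpy as np
    import torch
    import torch.nn as nn

    def keypoints_to_heatmaps(kpts, scores, h=56, w=56, sigma=0.6):
        """kpts: (T, K, 2) array of (x, y) pixel coords; scores: (T, K).
        Returns a float32 volume of shape (K, T, h, w)."""
        T, K, _ = kpts.shape
        ys, xs = np.mgrid[0:h, 0:w]
        volume = np.zeros((K, T, h, w), dtype=np.float32)
        for t in range(T):
            for k in range(K):
                x, y = kpts[t, k]
                g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
                volume[k, t] = scores[t, k] * g  # weight by keypoint confidence
        return volume

    # Toy 3D CNN standing in for a real video backbone.
    model = nn.Sequential(
        nn.Conv3d(17, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool3d(1),
        nn.Flatten(),
        nn.Linear(32, 60),  # e.g. 60 action classes
    )

    # Fake clip: 48 frames, 17 COCO keypoints on a 56x56 grid.
    kpts = np.random.rand(48, 17, 2) * 56
    scores = np.ones((48, 17), dtype=np.float32)
    volume = torch.from_numpy(keypoints_to_heatmaps(kpts, scores))
    logits = model(volume.unsqueeze(0))  # shape (1, 60)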
Persistent Identifier: http://hdl.handle.net/10722/352321
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
ISI Accession Number ID: WOS:000867754203022

 

DC Field | Value | Language
dc.contributor.author | Duan, Haodong | -
dc.contributor.author | Zhao, Yue | -
dc.contributor.author | Chen, Kai | -
dc.contributor.author | Lin, Dahua | -
dc.contributor.author | Dai, Bo | -
dc.date.accessioned | 2024-12-16T03:58:15Z | -
dc.date.available | 2024-12-16T03:58:15Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 2959-2968 | -
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | http://hdl.handle.net/10722/352321 | -
dc.description.abstract | Human skeleton, as a compact representation of human action, has received increasing attention in recent years. Many skeleton-based action recognition methods adopt GCNs to extract features on top of human skeletons. Despite the positive results shown in these attempts, GCN-based methods are subject to limitations in robustness, interoperability, and scalability. In this work, we propose PoseConv3D, a new approach to skeleton-based action recognition. PoseConv3D relies on a 3D heatmap volume instead of a graph sequence as the base representation of human skeletons. Compared to GCN-based methods, PoseConv3D is more effective in learning spatiotemporal features, more robust against pose estimation noises, and generalizes better in cross-dataset settings. Also, PoseConv3D can handle multiple-person scenarios without additional computation costs. The hierarchical features can be easily integrated with other modalities at early fusion stages, providing a great design space to boost the performance. PoseConv3D achieves the state-of-the-art on five of six standard skeleton-based action recognition benchmarks. Once fused with other modalities, it achieves the state-of-the-art on all eight multi-modality action recognition benchmarks. Code has been made available at: https://github.com/kennymckormick/pyskl. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | -
dc.subject | Action and event recognition | -
dc.subject | Video analysis and understanding | -
dc.title | Revisiting Skeleton-based Action Recognition | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/CVPR52688.2022.00298 | -
dc.identifier.scopus | eid_2-s2.0-85141766468 | -
dc.identifier.volume | 2022-June | -
dc.identifier.spage | 2959 | -
dc.identifier.epage | 2968 | -
dc.identifier.isi | WOS:000867754203022 | -
