Conference Paper: Intra- and inter-action understanding via temporal action parsing

Title: Intra- and inter-action understanding via temporal action parsing
Authors: Shao, Dian; Zhao, Yue; Dai, Bo; Lin, Dahua
Issue Date: 2020
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2020, p. 727-736
Abstract: Current methods for action recognition primarily rely on deep convolutional networks to derive feature embeddings of visual and motion features. While these methods have demonstrated remarkable performance on standard benchmarks, we still need a better understanding of how videos, in particular their internal structures, relate to high-level semantics. Such an understanding may bring benefits in multiple aspects, e.g. interpretable predictions and even new methods that can take recognition performance to the next level. Towards this goal, we construct TAPOS, a new dataset developed on sport videos with manual annotations of sub-actions, and conduct a study of temporal action parsing on top of it. Our study shows that a sport activity usually consists of multiple sub-actions and that awareness of such temporal structures is beneficial to action recognition. We also investigate a number of temporal parsing methods, and thereon devise an improved method that is capable of mining sub-actions from training data without knowing their labels. On the constructed TAPOS, the proposed method is shown to reveal intra-action information, i.e. how action instances are made up of sub-actions, and inter-action information, i.e. that one specific sub-action may commonly appear in various actions.
Persistent Identifier: http://hdl.handle.net/10722/352214
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
ISI Accession Number ID: WOS:000620679500074

DC Field: Value
dc.contributor.author: Shao, Dian
dc.contributor.author: Zhao, Yue
dc.contributor.author: Dai, Bo
dc.contributor.author: Lin, Dahua
dc.date.accessioned: 2024-12-16T03:57:21Z
dc.date.available: 2024-12-16T03:57:21Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2020, p. 727-736
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/352214
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.title: Intra- and inter-action understanding via temporal action parsing
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR42600.2020.00081
dc.identifier.scopus: eid_2-s2.0-85094159450
dc.identifier.spage: 727
dc.identifier.epage: 736
dc.identifier.isi: WOS:000620679500074
