
Conference Paper: Compressed video contrastive learning

Title: Compressed video contrastive learning
Authors: Huo, Y; Ding, M; Lu, H; Fei, N; Lu, Z; Wen, J; Luo, P
Keywords: Self-supervised learning; Contrastive learning; Representation learning
Issue Date: 2021
Publisher: Neural Information Processing Systems Foundation.
Citation: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), p. 14176-14187
Abstract: This work concerns self-supervised video representation learning (SSVRL), one topic that has received much attention recently. Since videos are storage-intensive and contain a rich source of visual content, models designed for SSVRL are expected to be storage- and computation-efficient, as well as effective. However, most existing methods only focus on one of the two objectives, failing to consider both at the same time. In this work, for the first time, the seemingly contradictory goals are simultaneously achieved by exploiting compressed videos and capturing mutual information between two input streams. Specifically, a novel Motion Vector based Cross Guidance Contrastive learning approach (MVCGC) is proposed. For storage and computation efficiency, we choose to directly decode RGB frames and motion vectors (that resemble low-resolution optical flows) from compressed videos on-the-fly. To enhance the representation ability of the motion vectors, hence the effectiveness of our method, we design a cross guidance contrastive learning algorithm based on multi-instance InfoNCE loss, where motion vectors can take supervision signals from RGB frames and vice versa. Comprehensive experiments on two downstream tasks show that our MVCGC yields new state-of-the-art while being significantly more efficient than its competitors.
Persistent Identifier: http://hdl.handle.net/10722/315681
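The abstract above describes a cross guidance contrastive learning objective built on a multi-instance InfoNCE loss, in which motion-vector features take supervision signals from RGB features and vice versa. The paper's exact formulation is not reproduced in this record; the sketch below is a minimal, hypothetical PyTorch illustration of a symmetric (single-positive) InfoNCE loss between the two streams. The function and variable names (cross_guidance_infonce, rgb_emb, mv_emb) and the temperature value are illustrative assumptions, not the authors' code.

# Hypothetical sketch (not the authors' implementation): symmetric InfoNCE
# between RGB-frame embeddings and motion-vector embeddings, so that each
# stream is "guided" by the other.
import torch
import torch.nn.functional as F

def cross_guidance_infonce(rgb_emb, mv_emb, temperature=0.07):
    """rgb_emb, mv_emb: (batch, dim) embeddings from the two stream encoders."""
    rgb = F.normalize(rgb_emb, dim=1)
    mv = F.normalize(mv_emb, dim=1)
    logits = rgb @ mv.t() / temperature              # pairwise similarities
    targets = torch.arange(rgb.size(0), device=rgb.device)
    # RGB queries contrasted against motion-vector keys, and vice versa.
    loss_rgb_to_mv = F.cross_entropy(logits, targets)
    loss_mv_to_rgb = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_rgb_to_mv + loss_mv_to_rgb)

# Example: 8 clips, 128-dim embeddings from each stream's encoder.
loss = cross_guidance_infonce(torch.randn(8, 128), torch.randn(8, 128))

Note that the paper's multi-instance variant allows several positives per query (e.g., multiple clips of the same video); this single-positive sketch does not capture that detail.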


DC Field | Value | Language
dc.contributor.author | Huo, Y | -
dc.contributor.author | Ding, M | -
dc.contributor.author | Lu, H | -
dc.contributor.author | Fei, N | -
dc.contributor.author | Lu, Z | -
dc.contributor.author | Wen, J | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2022-08-19T09:02:27Z | -
dc.date.available | 2022-08-19T09:02:27Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), p. 14176-14187 | -
dc.identifier.uri | http://hdl.handle.net/10722/315681 | -
dc.description.abstract | This work concerns self-supervised video representation learning (SSVRL), one topic that has received much attention recently. Since videos are storage-intensive and contain a rich source of visual content, models designed for SSVRL are expected to be storage- and computation-efficient, as well as effective. However, most existing methods only focus on one of the two objectives, failing to consider both at the same time. In this work, for the first time, the seemingly contradictory goals are simultaneously achieved by exploiting compressed videos and capturing mutual information between two input streams. Specifically, a novel Motion Vector based Cross Guidance Contrastive learning approach (MVCGC) is proposed. For storage and computation efficiency, we choose to directly decode RGB frames and motion vectors (that resemble low-resolution optical flows) from compressed videos on-the-fly. To enhance the representation ability of the motion vectors, hence the effectiveness of our method, we design a cross guidance contrastive learning algorithm based on multi-instance InfoNCE loss, where motion vectors can take supervision signals from RGB frames and vice versa. Comprehensive experiments on two downstream tasks show that our MVCGC yields new state-of-the-art while being significantly more efficient than its competitors. | -
dc.language | eng | -
dc.publisher | Neural Information Processing Systems Foundation. | -
dc.relation.ispartof | Advances in Neural Information Processing Systems 34 (NeurIPS 2021) | -
dc.subject | Self-supervised learning | -
dc.subject | Contrastive learning | -
dc.subject | Representation learning | -
dc.title | Compressed video contrastive learning | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.hkuros | 335595 | -
dc.identifier.spage | 14176 | -
dc.identifier.epage | 14187 | -
dc.publisher.place | United States | -
