
Conference Paper: Language as queries for referring video object segmentation

Title: Language as queries for referring video object segmentation
Authors: Wu, J; Jiang, Y; Sun, P; Yuan, Z; Luo, P
Issue Date: 2022
Publisher: IEEE Computer Society
Citation: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Virtual), New Orleans, Louisiana, USA, 19-24 June 2022. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, p. 4974-4984
Abstract: Referring video object segmentation (R-VOS) is an emerging cross-modal task that aims to segment the target object referred to by a language expression in all video frames. In this work, we propose a simple and unified framework built upon Transformer, termed ReferFormer. It views the language as queries and directly attends to the most relevant regions in the video frames. Concretely, we introduce a small set of object queries conditioned on the language as the input to the Transformer. In this manner, all the queries are obligated to find the referred objects only. They are eventually transformed into dynamic kernels which capture the crucial object-level information and play the role of convolution filters to generate the segmentation masks from feature maps. Object tracking is achieved naturally by linking the corresponding queries across frames. This mechanism greatly simplifies the pipeline, and the end-to-end framework is significantly different from previous methods. Extensive experiments on Ref-Youtube-VOS, Ref-DAVIS17, A2D-Sentences and JHMDB-Sentences show the effectiveness of ReferFormer. On Ref-Youtube-VOS, ReferFormer achieves 55.6 J&F with a ResNet-50 backbone without bells and whistles, exceeding the previous state-of-the-art performance by 8.4 points. In addition, with the strong Video-Swin-Base backbone, ReferFormer achieves the best J&F of 64.9 among all existing methods. Moreover, we show impressive results of 55.0 mAP and 43.7 mAP on A2D-Sentences and JHMDB-Sentences respectively, significantly outperforming previous methods.
Persistent Identifier: http://hdl.handle.net/10722/315549
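The abstract describes object queries that are conditioned on the language expression and transformed into dynamic kernels, which act as convolution filters over per-frame feature maps to produce segmentation masks. The following minimal PyTorch-style sketch is not the authors' implementation; all names, shapes and hyperparameters (hidden_dim, mask_dim, etc.) are illustrative assumptions. It shows only the per-frame dynamic-kernel step: each query embedding is mapped to the weights of a 1x1 convolution that is applied to the frame's mask features to obtain that query's mask logits.

    # Minimal sketch (assumed shapes/names, not the paper's code) of the
    # dynamic-kernel mask head described in the abstract.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicMaskHead(nn.Module):
        def __init__(self, hidden_dim: int = 256, mask_dim: int = 8):
            super().__init__()
            self.mask_dim = mask_dim
            # Predict a 1x1 conv weight vector plus a bias for each query.
            self.kernel_pred = nn.Linear(hidden_dim, mask_dim + 1)

        def forward(self, queries: torch.Tensor, mask_features: torch.Tensor) -> torch.Tensor:
            # queries:       (num_queries, hidden_dim)  language-conditioned query embeddings
            # mask_features: (mask_dim, H, W)            feature map of one frame
            params = self.kernel_pred(queries)            # (num_queries, mask_dim + 1)
            weight, bias = params[:, :-1], params[:, -1]  # split conv weights and biases
            weight = weight.view(-1, self.mask_dim, 1, 1) # (num_queries, mask_dim, 1, 1)
            # Each query's weight vector acts as a 1x1 convolution filter.
            masks = F.conv2d(mask_features.unsqueeze(0), weight, bias=bias)
            return masks.squeeze(0)                       # (num_queries, H, W) mask logits

    # Example: 5 object queries segmenting one frame with a 64x64 feature map.
    head = DynamicMaskHead()
    masks = head(torch.randn(5, 256), torch.randn(8, 64, 64))
    print(masks.shape)  # torch.Size([5, 64, 64])

In the full framework the queries are linked across frames to track the referred object; the sketch covers only the mask-generation step for a single frame.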

 

DC Field | Value | Language
dc.contributor.author | Wu, J | -
dc.contributor.author | Jiang, Y | -
dc.contributor.author | Sun, P | -
dc.contributor.author | Yuan, Z | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2022-08-19T08:59:57Z | -
dc.date.available | 2022-08-19T08:59:57Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Virtual), New Orleans, Louisiana, USA, 19-24 June 2022. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, p. 4974-4984 | -
dc.identifier.uri | http://hdl.handle.net/10722/315549 | -
dc.description.abstract | Referring video object segmentation (R-VOS) is an emerging cross-modal task that aims to segment the target object referred to by a language expression in all video frames. In this work, we propose a simple and unified framework built upon Transformer, termed ReferFormer. It views the language as queries and directly attends to the most relevant regions in the video frames. Concretely, we introduce a small set of object queries conditioned on the language as the input to the Transformer. In this manner, all the queries are obligated to find the referred objects only. They are eventually transformed into dynamic kernels which capture the crucial object-level information and play the role of convolution filters to generate the segmentation masks from feature maps. Object tracking is achieved naturally by linking the corresponding queries across frames. This mechanism greatly simplifies the pipeline, and the end-to-end framework is significantly different from previous methods. Extensive experiments on Ref-Youtube-VOS, Ref-DAVIS17, A2D-Sentences and JHMDB-Sentences show the effectiveness of ReferFormer. On Ref-Youtube-VOS, ReferFormer achieves 55.6 J&F with a ResNet-50 backbone without bells and whistles, exceeding the previous state-of-the-art performance by 8.4 points. In addition, with the strong Video-Swin-Base backbone, ReferFormer achieves the best J&F of 64.9 among all existing methods. Moreover, we show impressive results of 55.0 mAP and 43.7 mAP on A2D-Sentences and JHMDB-Sentences respectively, significantly outperforming previous methods. | -
dc.language | eng | -
dc.publisher | IEEE Computer Society. | -
dc.relation.ispartof | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 | -
dc.rights | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Copyright © IEEE Computer Society. | -
dc.title | Language as queries for referring video object segmentation | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.hkuros | 335581 | -
dc.identifier.spage | 4974 | -
dc.identifier.epage | 4984 | -
dc.publisher.place | United States | -
