File Download
There are no files associated with this item.

Links for fulltext (may require subscription):
- Publisher Website (DOI): https://doi.org/10.1609/aaai.v38i6.28465
- Scopus: eid_2-s2.0-85189525887
Citations:
- Scopus: 0

Appears in Collections: Conference Paper

Conference Paper: Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation
Title | Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation |
---|---|
Authors | Yan, Shilin; Zhang, Renrui; Guo, Ziyu; Chen, Wenchao; Zhang, Wei; Li, Hongyang; Qiao, Yu; Dong, Hao; He, Zhongjiang; Gao, Peng |
Issue Date | 2024 |
Citation | Proceedings of the AAAI Conference on Artificial Intelligence, 2024, v. 38, n. 6, p. 6449-6457 |
Abstract | Recently, video object segmentation (VOS) referred by multi-modal signals, e.g., language and audio, has attracted increasing attention in both industry and academia. It is challenging to explore the semantic alignment within modalities and the visual correspondence across frames. However, existing methods adopt separate network architectures for different modalities and neglect the inter-frame temporal interaction with references. In this paper, we propose MUTR, a Multi-modal Unified Temporal transformer for Referring video object segmentation. With a unified framework for the first time, MUTR adopts a DETR-style transformer and is capable of segmenting video objects designated by either text or audio references. Specifically, we introduce two strategies to fully explore the temporal relations between videos and multi-modal signals. First, for low-level temporal aggregation before the transformer, we enable the multi-modal references to capture multi-scale visual cues from consecutive video frames. This effectively endows the text or audio signals with temporal knowledge and boosts the semantic alignment between modalities. Second, for high-level temporal interaction after the transformer, we conduct inter-frame feature communication for different object embeddings, contributing to better object-wise correspondence for tracking along the video. On the Ref-YouTube-VOS and AVSBench datasets with text and audio references respectively, MUTR achieves +4.2% and +8.7% J&F improvements over state-of-the-art methods, demonstrating the significance of our unified multi-modal VOS. Code is released at https://github.com/OpenGVLab/MUTR. (An illustrative sketch of the two temporal strategies follows this table.) |
Persistent Identifier | http://hdl.handle.net/10722/351497 |
ISSN | 2159-5399 |
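
The abstract above describes two temporal strategies: low-level aggregation, in which the text/audio references absorb multi-scale visual cues from consecutive frames before the transformer, and high-level inter-frame interaction between per-frame object embeddings after it. The following Python (PyTorch) snippet is only a rough illustrative sketch under assumed shapes and module names; it is not the authors' implementation, which is available at https://github.com/OpenGVLab/MUTR.

```python
# Hypothetical sketch of the two temporal strategies named in the abstract.
# All module names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class ReferenceTemporalAggregation(nn.Module):
    """Low-level step: reference (text/audio) tokens cross-attend to visual
    features gathered from consecutive frames, injecting temporal cues."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ref_tokens, frame_feats):
        # ref_tokens:  (B, Nr, C)    tokens from a text or audio encoder
        # frame_feats: (B, T*HW, C)  flattened visual features of T frames
        attended, _ = self.cross_attn(ref_tokens, frame_feats, frame_feats)
        return self.norm(ref_tokens + attended)


class InterFrameInteraction(nn.Module):
    """High-level step: object embeddings of the same query slot communicate
    across frames via self-attention, improving object-wise correspondence."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, obj_embeds):
        # obj_embeds: (B, T, Q, C) per-frame object embeddings from the decoder
        B, T, Q, C = obj_embeds.shape
        x = obj_embeds.permute(0, 2, 1, 3).reshape(B * Q, T, C)  # attend over time
        attended, _ = self.self_attn(x, x, x)
        x = self.norm(x + attended)
        return x.reshape(B, Q, T, C).permute(0, 2, 1, 3)


if __name__ == "__main__":
    B, T, HW, Nr, Q, C = 1, 4, 64, 8, 5, 256
    agg = ReferenceTemporalAggregation(C)
    inter = InterFrameInteraction(C)
    refs = agg(torch.randn(B, Nr, C), torch.randn(B, T * HW, C))
    objs = inter(torch.randn(B, T, Q, C))
    print(refs.shape, objs.shape)  # torch.Size([1, 8, 256]) torch.Size([1, 4, 5, 256])
```

In the paper's framework these two components bracket a DETR-style transformer decoder that produces the per-frame object embeddings; the sketch omits that decoder and the segmentation head.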
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yan, Shilin | - |
dc.contributor.author | Zhang, Renrui | - |
dc.contributor.author | Guo, Ziyu | - |
dc.contributor.author | Chen, Wenchao | - |
dc.contributor.author | Zhang, Wei | - |
dc.contributor.author | Li, Hongyang | - |
dc.contributor.author | Qiao, Yu | - |
dc.contributor.author | Dong, Hao | - |
dc.contributor.author | He, Zhongjiang | - |
dc.contributor.author | Gao, Peng | - |
dc.date.accessioned | 2024-11-20T03:56:44Z | - |
dc.date.available | 2024-11-20T03:56:44Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Proceedings of the AAAI Conference on Artificial Intelligence, 2024, v. 38, n. 6, p. 6449-6457 | - |
dc.identifier.issn | 2159-5399 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351497 | - |
dc.description.abstract | Recently, video object segmentation (VOS) referred by multi-modal signals, e.g., language and audio, has attracted increasing attention in both industry and academia. It is challenging to explore the semantic alignment within modalities and the visual correspondence across frames. However, existing methods adopt separate network architectures for different modalities and neglect the inter-frame temporal interaction with references. In this paper, we propose MUTR, a Multi-modal Unified Temporal transformer for Referring video object segmentation. With a unified framework for the first time, MUTR adopts a DETR-style transformer and is capable of segmenting video objects designated by either text or audio references. Specifically, we introduce two strategies to fully explore the temporal relations between videos and multi-modal signals. First, for low-level temporal aggregation before the transformer, we enable the multi-modal references to capture multi-scale visual cues from consecutive video frames. This effectively endows the text or audio signals with temporal knowledge and boosts the semantic alignment between modalities. Second, for high-level temporal interaction after the transformer, we conduct inter-frame feature communication for different object embeddings, contributing to better object-wise correspondence for tracking along the video. On the Ref-YouTube-VOS and AVSBench datasets with text and audio references respectively, MUTR achieves +4.2% and +8.7% J&F improvements over state-of-the-art methods, demonstrating the significance of our unified multi-modal VOS. Code is released at https://github.com/OpenGVLab/MUTR. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the AAAI Conference on Artificial Intelligence | - |
dc.title | Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1609/aaai.v38i6.28465 | - |
dc.identifier.scopus | eid_2-s2.0-85189525887 | - |
dc.identifier.volume | 38 | - |
dc.identifier.issue | 6 | - |
dc.identifier.spage | 6449 | - |
dc.identifier.epage | 6457 | - |
dc.identifier.eissn | 2374-3468 | - |