Conference Paper: mCLIP: Multilingual CLIP via Cross-lingual Transfer

Title: mCLIP: Multilingual CLIP via Cross-lingual Transfer
Authors: Chen, Guanhua; Hou, Lu; Chen, Yun; Dai, Wenliang; Shang, Lifeng; Jiang, Xin; Liu, Qun; Pan, Jia; Wang, Wenping
Issue Date: 1-Jul-2023
Abstract

Large-scale vision-language pretrained (VLP) models like CLIP have shown remarkable performance on various downstream cross-modal tasks. However, they are usually biased towards English due to the lack of sufficient non-English image-text pairs. Existing multilingual VLP methods often learn retrieval-inefficient single-stream models using translation-augmented non-English image-text pairs. In this paper, we introduce mCLIP, a retrieval-efficient dual-stream multilingual VLP model, trained by aligning the CLIP model and a Multilingual Text Encoder (MTE) through a novel Triangle Cross-modal Knowledge Distillation (TriKD) method. It is parameter-efficient as only two light projectors on top of them are updated during distillation. Furthermore, to enhance the token- and sentence-level multilingual representation of the MTE, we propose to train it jointly with machine translation and contrastive learning before TriKD to provide a better initialization. Empirical results show that mCLIP achieves new state-of-the-art performance on both zero-shot and finetuned multilingual image-text retrieval tasks.
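The triangle alignment described in the abstract can be illustrated with a minimal numerical sketch: three frozen "vertices" (CLIP image features, CLIP English-text features, MTE features) are pulled together pairwise, and only two light projectors carry trainable parameters. The feature dimensions, projector placement, and cosine-distance edge loss below are illustrative assumptions, not the paper's exact losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    """Normalize rows to unit length."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-ins for frozen encoder outputs on a batch of 4 paired examples.
# Dimensions (512 for CLIP, 768 for the MTE) are illustrative only.
img_feat = l2norm(rng.standard_normal((4, 512)))  # CLIP image tower (frozen)
eng_feat = l2norm(rng.standard_normal((4, 512)))  # CLIP text tower (frozen)
mte_feat = l2norm(rng.standard_normal((4, 768)))  # multilingual text encoder (frozen)

# The only trainable parameters: two light linear projectors
# mapping each encoder's output into a shared space.
proj_mte = rng.standard_normal((768, 512)) * 0.02
proj_clip = rng.standard_normal((512, 512)) * 0.02

def triangle_distill_loss(proj_mte, proj_clip):
    """Sum of the three pairwise alignment 'edges' of the triangle,
    each measured as mean (1 - cosine similarity). Gradients would
    flow only through the two projectors, not the frozen features."""
    z_mte = l2norm(mte_feat @ proj_mte)
    z_img = l2norm(img_feat @ proj_clip)
    z_eng = l2norm(eng_feat @ proj_clip)
    edge = lambda a, b: float(np.mean(1.0 - np.sum(a * b, axis=-1)))
    return edge(z_mte, z_img) + edge(z_mte, z_eng) + edge(z_img, z_eng)

loss = triangle_distill_loss(proj_mte, proj_clip)
print(f"triangle loss: {loss:.4f}")  # each edge lies in [0, 2], so loss is in [0, 6]
```

In the actual method, distillation-style objectives replace the plain cosine edges here, and the pretrained-then-frozen towers make the scheme parameter-efficient, since only the projector weights are updated.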


Persistent Identifier: http://hdl.handle.net/10722/333844

 

DC Field: Value
dc.contributor.author: Chen, Guanhua
dc.contributor.author: Hou, Lu
dc.contributor.author: Chen, Yun
dc.contributor.author: Dai, Wenliang
dc.contributor.author: Shang, Lifeng
dc.contributor.author: Jiang, Xin
dc.contributor.author: Liu, Qun
dc.contributor.author: Pan, Jia
dc.contributor.author: Wang, Wenping
dc.date.accessioned: 2023-10-06T08:39:33Z
dc.date.available: 2023-10-06T08:39:33Z
dc.date.issued: 2023-07-01
dc.identifier.uri: http://hdl.handle.net/10722/333844
dc.language: eng
dc.relation.ispartof: Annual Meeting of the Association for Computational Linguistics (ACL 2023) (11/07/2023-18/07/2023)
dc.title: mCLIP: Multilingual CLIP via Cross-lingual Transfer
dc.type: Conference_Paper
dc.identifier.doi: 10.18653/v1/2023.acl-long.728
