Conference Paper: VLMixer: Unpaired vision-language pre-training via cross-modal CutMix

Title: VLMixer: Unpaired vision-language pre-training via cross-modal CutMix
Authors: Wang, T; Jiang, W; Lu, Z; Zheng, F; Cheng, R; Yin, C; Luo, P
Issue Date: 2022
Publisher: ML Research Press
Citation: International Conference on Machine Learning (ICML) (Virtual), Baltimore, Maryland, USA, 2022. In Proceedings of the 39th International Conference on Machine Learning (PMLR), v. 162, p. 22680-22690
Abstract: Existing vision-language pre-training (VLP) methods primarily rely on paired image-text datasets, which are either annotated with enormous human labor or crawled from the internet and then cleaned with elaborate data-cleaning techniques. To reduce the dependency on well-aligned image-text pairs, it is promising to directly leverage large-scale text-only and image-only corpora. This paper proposes a data augmentation method, namely cross-modal CutMix (CMC), for implicit cross-modal alignment learning in unpaired VLP. Specifically, CMC transforms natural sentences in the textual view into a multi-modal view, where visually grounded words in a sentence are randomly replaced by diverse image patches with similar semantics. The proposed CMC has several appealing properties. First, it enhances data diversity while keeping the semantic meaning intact, which helps tackle problems where aligned data are scarce. Second, by attaching cross-modal noise to uni-modal data, it guides models to learn token-level interactions across modalities for better denoising. Furthermore, we present a new unpaired VLP method, dubbed VLMixer, that integrates CMC with contrastive learning to pull together the uni-modal and multi-modal views for better instance-level alignment across modalities. Extensive experiments on five downstream tasks show that VLMixer surpasses previous state-of-the-art unpaired VLP methods.
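The core CMC idea described in the abstract (randomly swapping visually grounded words for semantically similar image patches) can be pictured with a short sketch. The snippet below is illustrative only, not the authors' released implementation: the `patch_bank` lookup (grounded word mapped to candidate image-patch features mined from an image-only corpus), the `replace_prob` hyperparameter, and the 768-dimensional feature size are all assumptions made for this example.

```python
import random
from typing import Dict, List, Sequence, Union

import numpy as np

# Hypothetical sketch of cross-modal CutMix (CMC): replace visually
# grounded words with image-patch feature vectors of similar semantics.
PatchFeature = np.ndarray  # e.g. a region/patch embedding of dimension d


def cross_modal_cutmix(
    tokens: Sequence[str],
    patch_bank: Dict[str, List[PatchFeature]],
    replace_prob: float = 0.3,
) -> List[Union[str, PatchFeature]]:
    """Return a multi-modal sequence mixing text tokens and patch features.

    Each grounded word (i.e. one that has candidate patches in
    `patch_bank`) is replaced with probability `replace_prob` by a
    randomly chosen patch feature; all other tokens are kept as text.
    """
    mixed: List[Union[str, PatchFeature]] = []
    for tok in tokens:
        candidates = patch_bank.get(tok.lower())
        if candidates and random.random() < replace_prob:
            # Swap the word for one of its semantically similar patches.
            mixed.append(random.choice(candidates))
        else:
            mixed.append(tok)
    return mixed


# Toy usage: two grounded words ("dog", "grass") have candidate patches.
bank = {
    "dog": [np.random.rand(768) for _ in range(3)],
    "grass": [np.random.rand(768) for _ in range(2)],
}
sentence = "a dog runs on the grass".split()
print([t if isinstance(t, str) else "<patch>"
       for t in cross_modal_cutmix(sentence, bank)])
```

The resulting mixed sequence would then be embedded and fed to a cross-modal Transformer for denoising-style pre-training, with contrastive learning pulling the uni-modal and multi-modal views together, as the abstract describes.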
Persistent Identifier: http://hdl.handle.net/10722/315548
ISSN: 1938-7228

 

DC Field: Value
dc.contributor.author: Wang, T
dc.contributor.author: Jiang, W
dc.contributor.author: Lu, Z
dc.contributor.author: Zheng, F
dc.contributor.author: Cheng, R
dc.contributor.author: Yin, C
dc.contributor.author: Luo, P
dc.date.accessioned: 2022-08-19T08:59:56Z
dc.date.available: 2022-08-19T08:59:56Z
dc.date.issued: 2022
dc.identifier.citation: International Conference on Machine Learning (ICML) (Virtual), Baltimore, Maryland, USA, 2022. In Proceedings of the 39th International Conference on Machine Learning (PMLR), v. 162, p. 22680-22690
dc.identifier.issn: 1938-7228
dc.identifier.uri: http://hdl.handle.net/10722/315548
dc.description.abstract: Existing vision-language pre-training (VLP) methods primarily rely on paired image-text datasets, which are either annotated with enormous human labor or crawled from the internet and then cleaned with elaborate data-cleaning techniques. To reduce the dependency on well-aligned image-text pairs, it is promising to directly leverage large-scale text-only and image-only corpora. This paper proposes a data augmentation method, namely cross-modal CutMix (CMC), for implicit cross-modal alignment learning in unpaired VLP. Specifically, CMC transforms natural sentences in the textual view into a multi-modal view, where visually grounded words in a sentence are randomly replaced by diverse image patches with similar semantics. The proposed CMC has several appealing properties. First, it enhances data diversity while keeping the semantic meaning intact, which helps tackle problems where aligned data are scarce. Second, by attaching cross-modal noise to uni-modal data, it guides models to learn token-level interactions across modalities for better denoising. Furthermore, we present a new unpaired VLP method, dubbed VLMixer, that integrates CMC with contrastive learning to pull together the uni-modal and multi-modal views for better instance-level alignment across modalities. Extensive experiments on five downstream tasks show that VLMixer surpasses previous state-of-the-art unpaired VLP methods.
dc.language: eng
dc.publisher: ML Research Press
dc.relation.ispartof: Proceedings of the 39th International Conference on Machine Learning (PMLR)
dc.title: VLMixer: Unpaired vision-language pre-training via cross-modal CutMix
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.identifier.hkuros: 335579
dc.identifier.volume: 162
dc.identifier.spage: 22680
dc.identifier.epage: 22690
dc.publisher.place: Netherlands
