
Conference Paper: Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning

Title: Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning
Authors: He, Y; Liang, W; Zhao, D; Zhou, H; Ge, W; Yu, Y; Zhang, W
Issue Date: 2022
Publisher: IEEE Computer Society.
Citation: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, Louisiana, USA, June 19-24, 2022
Abstract: This paper presents new hierarchically cascaded transformers that improve data efficiency through attribute surrogates learning and spectral tokens pooling. Vision transformers have recently emerged as a promising alternative to convolutional neural networks for visual recognition, but when training data is insufficient they tend to overfit and show inferior performance. To improve data efficiency, we propose hierarchically cascaded transformers that exploit intrinsic image structures through spectral tokens pooling and optimize the learnable parameters through latent attribute surrogates. Spectral tokens pooling uses the intrinsic image structure to reduce the ambiguity between foreground content and background noise, while the attribute surrogate learning scheme is designed to benefit from the rich visual information in image-label pairs rather than the simple visual concepts assigned by their labels. Our Hierarchically Cascaded Transformers, called HCTransformers, are built upon the self-supervised learning framework DINO and are tested on several popular few-shot learning benchmarks. In the inductive setting, HCTransformers surpass the DINO baseline by a large margin of 9.7% in 5-way 1-shot accuracy and 9.17% in 5-way 5-shot accuracy on miniImageNet, demonstrating that HCTransformers are efficient at extracting discriminative features. HCTransformers also show clear advantages over state-of-the-art few-shot classification methods in both 5-way 1-shot and 5-way 5-shot settings on four popular benchmark datasets: miniImageNet, tieredImageNet, FC100, and CIFAR-FS.
Persistent Identifier: http://hdl.handle.net/10722/316356
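The abstract's spectral tokens pooling groups patch tokens by the intrinsic structure of the image rather than by a fixed grid. As a rough illustration of the underlying idea, the sketch below clusters token embeddings with plain spectral clustering (normalized graph Laplacian plus a small k-means step) and averages tokens within each cluster. This is a minimal sketch under assumptions, not the paper's actual algorithm: the cosine-similarity affinity, the k-means step, and the function name `spectral_token_pooling` are all hypothetical choices for exposition.

```python
import numpy as np

def spectral_token_pooling(tokens, n_clusters, seed=0):
    """Pool patch tokens by spectral clustering on their affinity graph.

    Illustrative sketch only; the paper's pooling details may differ.
    tokens: (n, d) array of patch-token embeddings.
    Returns (pooled, labels): (n_clusters, d) pooled tokens and a
    (n,) cluster assignment per token.
    """
    # Cosine-similarity affinity between tokens (a hypothetical choice).
    unit = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    affinity = np.clip(unit @ unit.T, 0.0, None)

    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    deg = affinity.sum(axis=1)
    d_is = 1.0 / np.sqrt(deg + 1e-8)
    lap = np.eye(len(tokens)) - d_is[:, None] * affinity * d_is[None, :]

    # Spectral embedding: eigenvectors of the smallest eigenvalues.
    _, eigvecs = np.linalg.eigh(lap)
    embed = eigvecs[:, :n_clusters]

    # Small k-means on the spectral embedding.
    rng = np.random.default_rng(seed)
    centers = embed[rng.choice(len(embed), n_clusters, replace=False)]
    for _ in range(20):
        labels = np.argmin(((embed[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = embed[labels == k].mean(axis=0)

    # Pool: average the original tokens inside each spectral cluster.
    pooled = np.stack([
        tokens[labels == k].mean(axis=0) if (labels == k).any() else tokens.mean(axis=0)
        for k in range(n_clusters)
    ])
    return pooled, labels
```

In this toy form the pooling is non-differentiable; the paper's contribution presumably lies in making such structure-aware pooling work inside an end-to-end trained transformer, which this sketch does not attempt.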

 

DC Field: Value
dc.contributor.author: He, Y
dc.contributor.author: Liang, W
dc.contributor.author: Zhao, D
dc.contributor.author: Zhou, H
dc.contributor.author: Ge, W
dc.contributor.author: Yu, Y
dc.contributor.author: Zhang, W
dc.date.accessioned: 2022-09-02T06:10:02Z
dc.date.available: 2022-09-02T06:10:02Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, Louisiana, USA, June 19-24, 2022
dc.identifier.uri: http://hdl.handle.net/10722/316356
dc.description.abstract: This paper presents new hierarchically cascaded transformers that improve data efficiency through attribute surrogates learning and spectral tokens pooling. Vision transformers have recently emerged as a promising alternative to convolutional neural networks for visual recognition, but when training data is insufficient they tend to overfit and show inferior performance. To improve data efficiency, we propose hierarchically cascaded transformers that exploit intrinsic image structures through spectral tokens pooling and optimize the learnable parameters through latent attribute surrogates. Spectral tokens pooling uses the intrinsic image structure to reduce the ambiguity between foreground content and background noise, while the attribute surrogate learning scheme is designed to benefit from the rich visual information in image-label pairs rather than the simple visual concepts assigned by their labels. Our Hierarchically Cascaded Transformers, called HCTransformers, are built upon the self-supervised learning framework DINO and are tested on several popular few-shot learning benchmarks. In the inductive setting, HCTransformers surpass the DINO baseline by a large margin of 9.7% in 5-way 1-shot accuracy and 9.17% in 5-way 5-shot accuracy on miniImageNet, demonstrating that HCTransformers are efficient at extracting discriminative features. HCTransformers also show clear advantages over state-of-the-art few-shot classification methods in both 5-way 1-shot and 5-way 5-shot settings on four popular benchmark datasets: miniImageNet, tieredImageNet, FC100, and CIFAR-FS.
dc.language: eng
dc.publisher: IEEE Computer Society.
dc.rights: Copyright © IEEE Computer Society.
dc.title: Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning
dc.type: Conference_Paper
dc.identifier.email: Yu, Y: yzyu@cs.hku.hk
dc.identifier.authority: Yu, Y=rp01415
dc.identifier.hkuros: 336337
dc.publisher.place: United States
