Conference Paper: DaViT: Dual Attention Vision Transformers

Title: DaViT: Dual Attention Vision Transformers
Authors: Ding, M; Xiao, B; Codella, N; Luo, P; Yuan, L
Issue Date: 2022
Publisher: Ortra Ltd.
Citation: European Conference on Computer Vision (ECCV), Tel Aviv, Israel, October 23-27, 2022
Abstract: In this work, we introduce Dual Attention Vision Transformers (DaViT), a simple yet effective vision transformer architecture that is able to capture global context while maintaining computational efficiency. We propose approaching the problem from an orthogonal angle: exploiting self-attention mechanisms with both 'spatial tokens' and 'channel tokens'. With spatial tokens, the spatial dimension defines the token scope, and the channel dimension defines the token feature dimension. With channel tokens, we have the inverse: the channel dimension defines the token scope, and the spatial dimension defines the token feature dimension. We further group tokens along the sequence direction for both spatial and channel tokens to maintain the linear complexity of the entire model. We show that these two self-attentions complement each other: (i) since each channel token contains an abstract representation of the entire image, the channel attention naturally captures global interactions and representations by taking all spatial positions into account when computing attention scores between channels; (ii) the spatial attention refines the local representations by performing fine-grained interactions across spatial locations, which in turn helps the global information modeling in channel attention. Extensive experiments show our DaViT achieves state-of-the-art performance on four different tasks with efficient computations. Without extra data, DaViT-Tiny, DaViT-Small, and DaViT-Base achieve 82.8%, 84.2%, and 84.6% top-1 accuracy on ImageNet-1K with 28.3M, 49.7M, and 87.9M parameters, respectively. When we further scale up DaViT with 1.5B weakly supervised image and text pairs, DaViT-Giant reaches 90.4% top-1 accuracy on ImageNet-1K.
Description: Poster no. 812
Persistent Identifier: http://hdl.handle.net/10722/315794
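
To make the spatial-token/channel-token idea in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of the channel-attention side: channel groups act as tokens whose features span all spatial positions, so each attended token summarises the whole image. This is not the authors' released implementation; the names (ChannelGroupAttention, num_groups) and the exact scaling are our own assumptions, and the spatial branch would be an ordinary windowed self-attention over spatial tokens.

    # Illustrative sketch only: channel attention with channel groups as tokens.
    # Names and hyperparameters here are assumptions, not the paper's official code.
    import torch
    import torch.nn as nn


    class ChannelGroupAttention(nn.Module):
        def __init__(self, dim: int, num_groups: int = 8):
            super().__init__()
            assert dim % num_groups == 0, "dim must be divisible by num_groups"
            self.num_groups = num_groups
            self.scale = (dim // num_groups) ** -0.5
            self.qkv = nn.Linear(dim, dim * 3)
            self.proj = nn.Linear(dim, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (B, N, C) with N = H * W spatial positions
            B, N, C = x.shape
            d = C // self.num_groups
            qkv = self.qkv(x).reshape(B, N, 3, self.num_groups, d)
            q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, groups, N, d)
            # Attend over channels: the d channels in each group are the tokens,
            # and their N spatial responses are the token features.
            attn = (q.transpose(-2, -1) @ k) * self.scale  # (B, groups, d, d)
            attn = attn.softmax(dim=-1)
            out = (attn @ v.transpose(-2, -1)).transpose(-2, -1)  # (B, groups, N, d)
            out = out.permute(0, 2, 1, 3).reshape(B, N, C)
            return self.proj(out)


    if __name__ == "__main__":
        feats = torch.randn(2, 14 * 14, 96)    # 2 images, 14x14 feature map, 96 channels
        block = ChannelGroupAttention(dim=96, num_groups=8)
        print(block(feats).shape)              # torch.Size([2, 196, 96])

Because the attention matrix is d x d per group rather than N x N, the cost grows linearly with the number of spatial positions, which is the linear-complexity property the abstract refers to.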

 

DC Field: Value
dc.contributor.author: Ding, M
dc.contributor.author: Xiao, B
dc.contributor.author: Codella, N
dc.contributor.author: Luo, P
dc.contributor.author: Yuan, L
dc.date.accessioned: 2022-08-19T09:04:33Z
dc.date.available: 2022-08-19T09:04:33Z
dc.date.issued: 2022
dc.identifier.citation: European Conference on Computer Vision (ECCV), Tel Aviv, Israel, October 23-27, 2022
dc.identifier.uri: http://hdl.handle.net/10722/315794
dc.description: Poster no. 812
dc.description.abstract: In this work, we introduce Dual Attention Vision Transformers (DaViT), a simple yet effective vision transformer architecture that is able to capture global context while maintaining computational efficiency. We propose approaching the problem from an orthogonal angle: exploiting self-attention mechanisms with both 'spatial tokens' and 'channel tokens'. With spatial tokens, the spatial dimension defines the token scope, and the channel dimension defines the token feature dimension. With channel tokens, we have the inverse: the channel dimension defines the token scope, and the spatial dimension defines the token feature dimension. We further group tokens along the sequence direction for both spatial and channel tokens to maintain the linear complexity of the entire model. We show that these two self-attentions complement each other: (i) since each channel token contains an abstract representation of the entire image, the channel attention naturally captures global interactions and representations by taking all spatial positions into account when computing attention scores between channels; (ii) the spatial attention refines the local representations by performing fine-grained interactions across spatial locations, which in turn helps the global information modeling in channel attention. Extensive experiments show our DaViT achieves state-of-the-art performance on four different tasks with efficient computations. Without extra data, DaViT-Tiny, DaViT-Small, and DaViT-Base achieve 82.8%, 84.2%, and 84.6% top-1 accuracy on ImageNet-1K with 28.3M, 49.7M, and 87.9M parameters, respectively. When we further scale up DaViT with 1.5B weakly supervised image and text pairs, DaViT-Giant reaches 90.4% top-1 accuracy on ImageNet-1K.
dc.language: eng
dc.publisher: Ortra Ltd.
dc.title: DaViT: Dual Attention Vision Transformers
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.identifier.hkuros: 335565
dc.publisher.place: Israel
