Article: TransXNet: Learning Both Global and Local Dynamics With a Dual Dynamic Token Mixer for Visual Recognition

Title: TransXNet: Learning Both Global and Local Dynamics With a Dual Dynamic Token Mixer for Visual Recognition
Authors: Lou, Meng; Zhang, Shu; Zhou, Hong Yu; Yang, Sibei; Wu, Chuan; Yu, Yizhou
Keywords: Dual Dynamic Token Mixer; Vision Transformer; Visual recognition
Issue Date: 1-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2025, v. 36, n. 6, p. 11534-11547
Abstract

Recent studies have integrated convolutions into transformers to introduce inductive bias and improve generalization performance. However, the static nature of conventional convolution prevents it from dynamically adapting to input variations, resulting in a representation discrepancy between convolution and self-attention as self-attention calculates attention matrices dynamically. Furthermore, when stacking token mixers that consist of convolution and self-attention to form a deep network, the static nature of convolution hinders the fusion of features previously generated by self-attention into convolution kernels. These two limitations result in a suboptimal representation capacity of the constructed networks. To find a solution, we propose a lightweight dual dynamic token mixer (D-Mixer) to simultaneously learn global and local dynamics, that is, mechanisms that compute weights for aggregating global contexts and local details in an input-dependent manner. D-Mixer works by applying an efficient global attention module and an input-dependent depthwise convolution separately on evenly split feature segments, endowing the network with strong inductive bias and an enlarged effective receptive field. We use D-Mixer as the basic building block to design TransXNet, a novel hybrid CNN–transformer vision backbone network that delivers compelling performance. In the ImageNet-1K image classification task, TransXNet-T surpasses Swin-T by 0.3% in top-1 accuracy while requiring less than half of the computational cost. Furthermore, TransXNet-S and TransXNet-B exhibit excellent model scalability, achieving top-1 accuracy of 83.8% and 84.6%, respectively, with reasonable computational costs. In addition, our proposed network architecture demonstrates strong generalization capabilities in various dense prediction tasks, outperforming other state-of-the-art networks while having lower computational costs.
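The abstract's core mechanism (split the feature channels evenly, mix one half with global self-attention and the other with an input-dependent depthwise convolution, then concatenate) can be illustrated with a short sketch. The snippet below is a minimal PyTorch illustration of that split-and-mix idea under assumed shapes and layer choices; the class DualMixerSketch, the pooled-feature kernel generator kernel_gen, and all sizes are hypothetical stand-ins, not the authors' D-Mixer implementation (see the paper via the DOI below for the real module).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualMixerSketch(nn.Module):
        # Sketch of the dual-branch token mixer described in the abstract:
        # one half of the channels gets global self-attention, the other
        # half gets a per-sample (input-dependent) depthwise convolution.
        def __init__(self, dim, num_heads=4, kernel_size=7):
            super().__init__()
            self.half = dim // 2
            self.kernel_size = kernel_size
            # Global branch: ordinary multi-head self-attention over tokens.
            self.attn = nn.MultiheadAttention(self.half, num_heads, batch_first=True)
            # Local branch: predict a depthwise kernel per sample from
            # globally pooled features -- this is the "dynamic" part.
            self.kernel_gen = nn.Linear(self.half, self.half * kernel_size ** 2)

        def forward(self, x):  # x: (B, C, H, W)
            B, C, H, W = x.shape
            xa, xc = x.split(self.half, dim=1)  # even channel split
            # Global dynamics: attention over the H*W token sequence.
            t = xa.flatten(2).transpose(1, 2)   # (B, H*W, half)
            t, _ = self.attn(t, t, t)
            xa = t.transpose(1, 2).reshape(B, self.half, H, W)
            # Local dynamics: per-sample depthwise conv via the groups trick,
            # folding the batch into the channel dimension.
            k = self.kernel_gen(xc.mean(dim=(2, 3)))  # (B, half*k*k)
            k = k.reshape(B * self.half, 1, self.kernel_size, self.kernel_size)
            xc = F.conv2d(xc.reshape(1, B * self.half, H, W), k,
                          padding=self.kernel_size // 2, groups=B * self.half)
            xc = xc.reshape(B, self.half, H, W)
            return torch.cat([xa, xc], dim=1)   # (B, C, H, W)

    # Example: a 64-channel, 14x14 feature map keeps its shape.
    x = torch.randn(2, 64, 14, 14)
    print(DualMixerSketch(64)(x).shape)  # torch.Size([2, 64, 14, 14])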


Persistent Identifier: http://hdl.handle.net/10722/361929
ISSN: 2162-237X (print); 2162-2388 (electronic)
2023 Impact Factor: 10.2
2023 SCImago Journal Rankings: 4.170


DC Field                   Value
dc.contributor.author      Lou, Meng
dc.contributor.author      Zhang, Shu
dc.contributor.author      Zhou, Hong Yu
dc.contributor.author      Yang, Sibei
dc.contributor.author      Wu, Chuan
dc.contributor.author      Yu, Yizhou
dc.date.accessioned        2025-09-17T00:32:07Z
dc.date.available          2025-09-17T00:32:07Z
dc.date.issued             2025-01-01
dc.identifier.citation     IEEE Transactions on Neural Networks and Learning Systems, 2025, v. 36, n. 6, p. 11534-11547
dc.identifier.issn         2162-237X
dc.identifier.uri          http://hdl.handle.net/10722/361929
dc.description.abstract    (full abstract as above)
dc.language                eng
dc.publisher               Institute of Electrical and Electronics Engineers
dc.relation.ispartof       IEEE Transactions on Neural Networks and Learning Systems
dc.subject                 Dual Dynamic Token Mixer
dc.subject                 Vision Transformer
dc.subject                 Visual recognition
dc.title                   TransXNet: Learning Both Global and Local Dynamics With a Dual Dynamic Token Mixer for Visual Recognition
dc.type                    Article
dc.identifier.doi          10.1109/TNNLS.2025.3550979
dc.identifier.pmid         40178959
dc.identifier.scopus       eid_2-s2.0-105002250309
dc.identifier.volume       36
dc.identifier.issue        6
dc.identifier.spage        11534
dc.identifier.epage        11547
dc.identifier.eissn        2162-2388
dc.identifier.issnl        2162-237X
