
Conference Paper: Masked Spiking Transformer

Title: Masked Spiking Transformer
Authors: Wang, Ziqing; Fang, Yuetong; Cao, Jiahang; Zhang, Qiang; Wang, Zhongrui; Xu, Renjing
Issue Date: 6-Oct-2023
Abstract

The combination of Spiking Neural Networks (SNNs) and Transformers has attracted significant attention due to their potential for high energy efficiency and high performance. However, existing works on this topic typically rely on direct training, which can lead to suboptimal performance. To address this issue, we propose to leverage the benefits of the ANN-to-SNN conversion method to combine SNNs and Transformers, resulting in significantly improved performance over existing state-of-the-art SNN models. Furthermore, inspired by the quantal synaptic failures observed in the nervous system, which reduce the number of spikes transmitted across synapses, we introduce a novel Masked Spiking Transformer (MST) framework. This incorporates a Random Spike Masking (RSM) method to prune redundant spikes and reduce energy consumption without sacrificing performance. Our experimental results demonstrate that the proposed MST model achieves a significant reduction of 26.8% in power consumption when the masking ratio is 75% while maintaining the same level of performance as the unmasked model. The code is available at: https://github.com/bic-L/Masked-Spiking-Transformer.
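
For intuition, below is a minimal sketch of the Random Spike Masking (RSM) idea described in the abstract, assuming a PyTorch-style binary spike tensor. The function name random_spike_masking, its signature, and the toy tensor shapes are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

    # Minimal sketch (assumption): randomly suppress a fraction of spikes,
    # mimicking quantal synaptic failure. Not the authors' implementation;
    # see https://github.com/bic-L/Masked-Spiking-Transformer for the real code.
    import torch

    def random_spike_masking(spikes: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
        # Draw an independent uniform sample per spike location and keep the
        # spike with probability (1 - mask_ratio); masked positions become 0.
        keep = torch.rand(spikes.shape, device=spikes.device) >= mask_ratio
        return spikes * keep.to(spikes.dtype)

    # Toy usage: a sparse binary spike tensor of shape [batch, tokens, features].
    spikes = (torch.rand(2, 196, 384) < 0.3).float()
    masked = random_spike_masking(spikes, mask_ratio=0.75)

With mask_ratio=0.75, about three quarters of the transmitted spikes are dropped at random, which is the setting under which the paper reports the 26.8% power reduction.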


Persistent Identifier: http://hdl.handle.net/10722/339391

 

DC Field | Value | Language
dc.contributor.author | Wang, Ziqing | -
dc.contributor.author | Fang, Yuetong | -
dc.contributor.author | Cao, Jiahang | -
dc.contributor.author | Zhang, Qiang | -
dc.contributor.author | Wang, Zhongrui | -
dc.contributor.author | Xu, Renjing | -
dc.date.accessioned | 2024-03-11T10:36:15Z | -
dc.date.available | 2024-03-11T10:36:15Z | -
dc.date.issued | 2023-10-06 | -
dc.identifier.uri | http://hdl.handle.net/10722/339391 | -
dc.description.abstract | The combination of Spiking Neural Networks (SNNs) and Transformers has attracted significant attention due to their potential for high energy efficiency and high performance. However, existing works on this topic typically rely on direct training, which can lead to suboptimal performance. To address this issue, we propose to leverage the benefits of the ANN-to-SNN conversion method to combine SNNs and Transformers, resulting in significantly improved performance over existing state-of-the-art SNN models. Furthermore, inspired by the quantal synaptic failures observed in the nervous system, which reduce the number of spikes transmitted across synapses, we introduce a novel Masked Spiking Transformer (MST) framework. This incorporates a Random Spike Masking (RSM) method to prune redundant spikes and reduce energy consumption without sacrificing performance. Our experimental results demonstrate that the proposed MST model achieves a significant reduction of 26.8% in power consumption when the masking ratio is 75% while maintaining the same level of performance as the unmasked model. The code is available at: https://github.com/bic-L/Masked-Spiking-Transformer. | -
dc.language | eng | -
dc.relation.ispartof | 2023 International Conference on Computer Vision (ICCV) (02/10/2023-06/10/2023, Paris) | -
dc.title | Masked Spiking Transformer | -
dc.type | Conference_Paper | -
