Article: nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer

Title: nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer
Authors: Zhou, HY; Guo, JS; Zhang, YH; Han, XG; Yu, LQ; Wang, LS; Yu, YZ
Keywords: attention mechanism; Transformer; volumetric image segmentation
Issue Date: 13-Jul-2023
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Image Processing, 2023, v. 32, p. 4036-4045
Abstract

Transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community. Given their ability to exploit long-term dependencies, transformers are promising to help atypical convolutional neural networks learn more contextualized visual representations. However, most recently proposed transformer-based segmentation approaches simply treat transformers as assisted modules that help encode global context into convolutional representations. To address this issue, we introduce nnFormer (i.e., not-another transFormer), a 3D transformer for volumetric medical image segmentation. nnFormer not only exploits the combination of interleaved convolution and self-attention operations, but also introduces local and global volume-based self-attention mechanisms to learn volume representations. Moreover, nnFormer proposes to use skip attention to replace the traditional concatenation/summation operations in the skip connections of U-Net-like architectures. Experiments show that nnFormer outperforms previous transformer-based counterparts by large margins on three public datasets. Compared with nnUNet, the most widely recognized convnet-based 3D medical segmentation model, nnFormer produces significantly lower HD95 and is much more computationally efficient. Furthermore, we show that nnFormer and nnUNet are highly complementary to each other in model ensembling. Code and models of nnFormer are available at https://git.io/JSf3i.
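As a rough illustration of two mechanisms the abstract names, local volume-based self-attention and skip attention, the following PyTorch sketch restricts self-attention to non-overlapping 3D windows and replaces skip-connection concatenation with cross-attention in which decoder tokens query encoder tokens. This is a minimal sketch under assumed shapes, window sizes, and module names, not the authors' released implementation (that is at https://git.io/JSf3i).

# Minimal sketch (illustrative, not the released nnFormer code).
import torch
import torch.nn as nn

class LocalVolumeSelfAttention(nn.Module):
    """Self-attention restricted to non-overlapping 3D windows
    (assumed window size; the paper's exact settings may differ)."""
    def __init__(self, dim, window=(4, 4, 4), heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, D, H, W)
        B, C, D, H, W = x.shape
        wd, wh, ww = self.window
        # Partition the volume into (wd, wh, ww) windows.
        x = x.view(B, C, D // wd, wd, H // wh, wh, W // ww, ww)
        x = x.permute(0, 2, 4, 6, 3, 5, 7, 1)  # (B, nD, nH, nW, wd, wh, ww, C)
        x = x.reshape(-1, wd * wh * ww, C)     # each window is one sequence
        x, _ = self.attn(x, x, x)              # attention within each window
        # Reverse the partition back to (B, C, D, H, W).
        x = x.view(B, D // wd, H // wh, W // ww, wd, wh, ww, C)
        x = x.permute(0, 7, 1, 4, 2, 5, 3, 6).reshape(B, C, D, H, W)
        return x

class SkipAttention(nn.Module):
    """Replace skip-connection concatenation/summation with
    cross-attention: decoder tokens query encoder tokens."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, dec, enc):               # both: (B, C, D, H, W)
        B, C, D, H, W = dec.shape
        q = dec.flatten(2).transpose(1, 2)     # (B, D*H*W, C) query tokens
        kv = enc.flatten(2).transpose(1, 2)    # encoder keys/values
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).view(B, C, D, H, W)

# Shape check on a toy volume.
x = torch.randn(1, 32, 8, 16, 16)
print(LocalVolumeSelfAttention(32)(x).shape)   # torch.Size([1, 32, 8, 16, 16])
print(SkipAttention(32)(x, x).shape)           # torch.Size([1, 32, 8, 16, 16])

The global variant described in the abstract would follow the same windowed pattern at a coarser (downsampled) resolution, so that one window can cover the whole volume; the exact window sizes and block placement are defined in the released code.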


Persistent Identifier: http://hdl.handle.net/10722/331453
ISSN: 1057-7149 (print); 1941-0042 (electronic)
2021 Impact Factor: 11.041
2020 SCImago Journal Rankings: 1.778
ISI Accession Number ID: WOS:001033515600010


DC Field | Value | Language
dc.contributor.author | Zhou, HY | -
dc.contributor.author | Guo, JS | -
dc.contributor.author | Zhang, YH | -
dc.contributor.author | Han, XG | -
dc.contributor.author | Yu, LQ | -
dc.contributor.author | Wang, LS | -
dc.contributor.author | Yu, YZ | -
dc.date.accessioned | 2023-09-21T06:55:52Z | -
dc.date.available | 2023-09-21T06:55:52Z | -
dc.date.issued | 2023-07-13 | -
dc.identifier.citation | IEEE Transactions on Image Processing, 2023, v. 32, p. 4036-4045 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10722/331453 | -
dc.description.abstract | Transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community. Given their ability to exploit long-term dependencies, transformers are promising to help atypical convolutional neural networks learn more contextualized visual representations. However, most recently proposed transformer-based segmentation approaches simply treat transformers as assisted modules that help encode global context into convolutional representations. To address this issue, we introduce nnFormer (i.e., not-another transFormer), a 3D transformer for volumetric medical image segmentation. nnFormer not only exploits the combination of interleaved convolution and self-attention operations, but also introduces local and global volume-based self-attention mechanisms to learn volume representations. Moreover, nnFormer proposes to use skip attention to replace the traditional concatenation/summation operations in the skip connections of U-Net-like architectures. Experiments show that nnFormer outperforms previous transformer-based counterparts by large margins on three public datasets. Compared with nnUNet, the most widely recognized convnet-based 3D medical segmentation model, nnFormer produces significantly lower HD95 and is much more computationally efficient. Furthermore, we show that nnFormer and nnUNet are highly complementary to each other in model ensembling. Code and models of nnFormer are available at https://git.io/JSf3i. | -
dc.language | eng | -
dc.publisher | Institute of Electrical and Electronics Engineers | -
dc.relation.ispartof | IEEE Transactions on Image Processing | -
dc.subject | attention mechanism | -
dc.subject | Transformer | -
dc.subject | volumetric image segmentation | -
dc.title | nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer | -
dc.type | Article | -
dc.identifier.doi | 10.1109/TIP.2023.3293771 | -
dc.identifier.pmid | 37440404 | -
dc.identifier.scopus | eid_2-s2.0-85164776288 | -
dc.identifier.volume | 32 | -
dc.identifier.spage | 4036 | -
dc.identifier.epage | 4045 | -
dc.identifier.eissn | 1941-0042 | -
dc.identifier.isi | WOS:001033515600010 | -
dc.publisher.place | PISCATAWAY | -
dc.identifier.issnl | 1057-7149 | -
