Links for fulltext (May Require Subscription)
- Publisher Website: 10.1109/TIP.2023.3293771
- Scopus: eid_2-s2.0-85164776288
- PMID: 37440404
- WOS: WOS:001033515600010
Article: nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer
Title | nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer |
---|---|
Authors | Zhou, HY; Guo, JS; Zhang, YH; Han, XG; Yu, LQ; Wang, LS; Yu, YZ |
Keywords | attention mechanism; Transformer; volumetric image segmentation |
Issue Date | 13-Jul-2023 |
Publisher | Institute of Electrical and Electronics Engineers |
Citation | IEEE Transactions on Image Processing, 2023, v. 32, p. 4036-4045 |
Abstract | Transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community. Given the ability to exploit long-term dependencies, transformers are promising to help atypical convolutional neural networks learn more contextualized visual representations. However, most recently proposed transformer-based segmentation approaches simply treat transformers as assistive modules that help encode global context into convolutional representations. To address this issue, we introduce nnFormer (i.e., not-another transFormer), a 3D transformer for volumetric medical image segmentation. nnFormer not only exploits the combination of interleaved convolution and self-attention operations, but also introduces local and global volume-based self-attention mechanisms to learn volume representations. Moreover, nnFormer proposes to use skip attention to replace the traditional concatenation/summation operations in the skip connections of U-Net-like architectures. Experiments show that nnFormer significantly outperforms previous transformer-based counterparts by large margins on three public datasets. Compared to nnUNet, the most widely recognized convnet-based 3D medical segmentation model, nnFormer produces significantly lower HD95 and is much more computationally efficient. Furthermore, we show that nnFormer and nnUNet are highly complementary to each other in model ensembling. Codes and models of nnFormer are available at https://git.io/JSf3i. |
Persistent Identifier | http://hdl.handle.net/10722/331453 |
ISSN | 1057-7149 (2021 Impact Factor: 11.041; 2020 SCImago Journal Rankings: 1.778) |
ISI Accession Number ID | WOS:001033515600010 |
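As a brief illustration of the local volume-based self-attention the abstract describes, the sketch below partitions a 3D feature volume into non-overlapping windows and applies single-head attention within each window. This is a simplified NumPy sketch of the general windowed-attention idea, not the authors' implementation: the window size, the identity query/key/value projections, and the absence of multi-head splitting are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_volume_attention(x, window=(2, 2, 2)):
    """Self-attention within non-overlapping 3D windows.

    x: (D, H, W, C) volume of token embeddings.
    Returns a volume of the same shape. Query/key/value projections
    are omitted (identity) to keep the sketch minimal.
    """
    D, H, W, C = x.shape
    wd, wh, ww = window
    assert D % wd == 0 and H % wh == 0 and W % ww == 0
    # Partition the volume into windows: (num_windows, wd*wh*ww, C).
    xw = x.reshape(D // wd, wd, H // wh, wh, W // ww, ww, C)
    xw = xw.transpose(0, 2, 4, 1, 3, 5, 6).reshape(-1, wd * wh * ww, C)
    # Scaled dot-product attention, computed independently per window.
    attn = softmax(xw @ xw.transpose(0, 2, 1) / np.sqrt(C), axis=-1)
    out = attn @ xw
    # Reverse the window partition back to (D, H, W, C).
    out = out.reshape(D // wd, H // wh, W // ww, wd, wh, ww, C)
    out = out.transpose(0, 3, 1, 4, 2, 5, 6).reshape(D, H, W, C)
    return out

vol = np.random.randn(4, 4, 4, 8)
out = local_volume_attention(vol)
print(out.shape)  # (4, 4, 4, 8)
```

Restricting attention to local windows keeps the cost linear in the number of voxels rather than quadratic, which is what makes volume-level self-attention tractable; a global variant would attend across (downsampled) windows instead.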
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhou, HY | - |
dc.contributor.author | Guo, JS | - |
dc.contributor.author | Zhang, YH | - |
dc.contributor.author | Han, XG | - |
dc.contributor.author | Yu, LQ | - |
dc.contributor.author | Wang, LS | - |
dc.contributor.author | Yu, YZ | - |
dc.date.accessioned | 2023-09-21T06:55:52Z | - |
dc.date.available | 2023-09-21T06:55:52Z | - |
dc.date.issued | 2023-07-13 | - |
dc.identifier.citation | IEEE Transactions on Image Processing, 2023, v. 32, p. 4036-4045 | - |
dc.identifier.issn | 1057-7149 | - |
dc.identifier.uri | http://hdl.handle.net/10722/331453 | - |
dc.description.abstract | <p>Transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community. Given the ability to exploit long-term dependencies, transformers are promising to help atypical convolutional neural networks learn more contextualized visual representations. However, most recently proposed transformer-based segmentation approaches simply treat transformers as assistive modules that help encode global context into convolutional representations. To address this issue, we introduce nnFormer (i.e., not-another transFormer), a 3D transformer for volumetric medical image segmentation. nnFormer not only exploits the combination of interleaved convolution and self-attention operations, but also introduces local and global volume-based self-attention mechanisms to learn volume representations. Moreover, nnFormer proposes to use skip attention to replace the traditional concatenation/summation operations in the skip connections of U-Net-like architectures. Experiments show that nnFormer significantly outperforms previous transformer-based counterparts by large margins on three public datasets. Compared to nnUNet, the most widely recognized convnet-based 3D medical segmentation model, nnFormer produces significantly lower HD95 and is much more computationally efficient. Furthermore, we show that nnFormer and nnUNet are highly complementary to each other in model ensembling. Codes and models of nnFormer are available at https://git.io/JSf3i.</p> | - |
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.ispartof | IEEE Transactions on Image Processing | - |
dc.subject | attention mechanism | - |
dc.subject | Transformer | - |
dc.subject | volumetric image segmentation | - |
dc.title | nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/TIP.2023.3293771 | - |
dc.identifier.pmid | 37440404 | - |
dc.identifier.scopus | eid_2-s2.0-85164776288 | - |
dc.identifier.volume | 32 | - |
dc.identifier.spage | 4036 | - |
dc.identifier.epage | 4045 | - |
dc.identifier.eissn | 1941-0042 | - |
dc.identifier.isi | WOS:001033515600010 | - |
dc.publisher.place | PISCATAWAY | - |
dc.identifier.issnl | 1057-7149 | - |