Links for fulltext (May Require Subscription)
- Publisher Website: 10.1016/j.media.2024.103310
- Scopus: eid_2-s2.0-85201877388
- PMID: 39182302
Article: MA-SAM: Modality-agnostic SAM adaptation for 3D medical image segmentation
Title | MA-SAM: Modality-agnostic SAM adaptation for 3D medical image segmentation |
---|---|
Authors | Chen, Cheng; Miao, Juzheng; Wu, Dufan; Zhong, Aoxiao; Yan, Zhiling; Kim, Sekeun; Hu, Jiang; Liu, Zhengliang; Sun, Lichao; Li, Xiang; Liu, Tianming; Heng, Pheng Ann; Li, Quanzheng |
Keywords | Foundation model; Medical image segmentation; Segment anything |
Issue Date | 2024 |
Citation | Medical Image Analysis, 2024, v. 98, article no. 103310 |
Abstract | The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks. However, SAM's performance significantly declines when applied to medical images, primarily due to the substantial disparity between natural and medical image domains. To effectively adapt SAM to medical images, it is important to incorporate critical third-dimensional information, i.e., volumetric or temporal knowledge, during fine-tuning. Simultaneously, we aim to harness SAM's pre-trained weights within its original 2D backbone to the fullest extent. In this paper, we introduce a modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable to various volumetric and video medical data. Our method is rooted in a parameter-efficient fine-tuning strategy, updating only a small portion of weight increments while preserving the majority of SAM's pre-trained weights. By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data. We comprehensively evaluate our method on five medical image segmentation tasks, using 11 public datasets across CT, MRI, and surgical video data. Remarkably, without using any prompt, our method consistently outperforms various state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical scene segmentation, respectively. Our model also demonstrates strong generalization and excels in challenging tumor segmentation when prompts are used. Our code is available at: https://github.com/cchen-cc/MA-SAM. |
Persistent Identifier | http://hdl.handle.net/10722/349214 |
ISSN | 1361-8415 (2023 Impact Factor: 10.7; 2023 SCImago Journal Rankings: 4.112) |
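The abstract's central mechanism — a 3D adapter injected into a 2D transformer block so a frozen 2D backbone can mix information across slices — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code (see their repository for the real implementation); the function name, the low-rank width `r`, and the 3-tap average standing in for the paper's learned 3D convolution are all assumptions made here for clarity.

```python
import numpy as np

def adapter_3d(tokens, w_down, w_up, depth):
    """Sketch of a 3D adapter on slice-wise 2D features.

    tokens: (B*D, H, W, C) feature maps from a 2D transformer block,
    where the D slices of each volume are stacked along the batch axis.
    """
    bd, h, w, c = tokens.shape
    b = bd // depth
    x = tokens @ w_down                    # channel down-projection to rank r
    x = x.reshape(b, depth, h, w, -1)      # recover the depth (slice) axis
    # Depth-wise mixing: a simple 3-tap average along the slice dimension
    # stands in for the learned 3D convolution described in the paper.
    x = (np.roll(x, 1, axis=1) + x + np.roll(x, -1, axis=1)) / 3.0
    x = x.reshape(bd, h, w, -1)
    x = np.maximum(x, 0.0)                 # nonlinearity
    x = x @ w_up                           # channel up-projection back to C
    return tokens + x                      # residual: backbone stays frozen

rng = np.random.default_rng(0)
B, D, H, W, C, r = 1, 4, 8, 8, 32, 8       # r: low-rank bottleneck width
feats = rng.standard_normal((B * D, H, W, C))
w_down = rng.standard_normal((C, r)) * 0.01
w_up = rng.standard_normal((r, C)) * 0.01
out = adapter_3d(feats, w_down, w_up, depth=D)
assert out.shape == feats.shape            # adapter preserves feature shape
```

Because the adapter is residual and only `w_down`/`w_up` (plus the depth-mixing weights in the real model) are trained, the vast majority of SAM's pre-trained 2D weights are left untouched, which is the parameter-efficient fine-tuning strategy the abstract refers to.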
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Cheng | - |
dc.contributor.author | Miao, Juzheng | - |
dc.contributor.author | Wu, Dufan | - |
dc.contributor.author | Zhong, Aoxiao | - |
dc.contributor.author | Yan, Zhiling | - |
dc.contributor.author | Kim, Sekeun | - |
dc.contributor.author | Hu, Jiang | - |
dc.contributor.author | Liu, Zhengliang | - |
dc.contributor.author | Sun, Lichao | - |
dc.contributor.author | Li, Xiang | - |
dc.contributor.author | Liu, Tianming | - |
dc.contributor.author | Heng, Pheng Ann | - |
dc.contributor.author | Li, Quanzheng | - |
dc.date.accessioned | 2024-10-17T06:57:02Z | - |
dc.date.available | 2024-10-17T06:57:02Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Medical Image Analysis, 2024, v. 98, article no. 103310 | - |
dc.identifier.issn | 1361-8415 | - |
dc.identifier.uri | http://hdl.handle.net/10722/349214 | - |
dc.description.abstract | The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks. However, SAM's performance significantly declines when applied to medical images, primarily due to the substantial disparity between natural and medical image domains. To effectively adapt SAM to medical images, it is important to incorporate critical third-dimensional information, i.e., volumetric or temporal knowledge, during fine-tuning. Simultaneously, we aim to harness SAM's pre-trained weights within its original 2D backbone to the fullest extent. In this paper, we introduce a modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable to various volumetric and video medical data. Our method is rooted in a parameter-efficient fine-tuning strategy, updating only a small portion of weight increments while preserving the majority of SAM's pre-trained weights. By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data. We comprehensively evaluate our method on five medical image segmentation tasks, using 11 public datasets across CT, MRI, and surgical video data. Remarkably, without using any prompt, our method consistently outperforms various state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical scene segmentation, respectively. Our model also demonstrates strong generalization and excels in challenging tumor segmentation when prompts are used. Our code is available at: https://github.com/cchen-cc/MA-SAM. | - |
dc.language | eng | - |
dc.relation.ispartof | Medical Image Analysis | - |
dc.subject | Foundation model | - |
dc.subject | Medical image segmentation | - |
dc.subject | Segment anything | - |
dc.title | MA-SAM: Modality-agnostic SAM adaptation for 3D medical image segmentation | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1016/j.media.2024.103310 | - |
dc.identifier.pmid | 39182302 | - |
dc.identifier.scopus | eid_2-s2.0-85201877388 | - |
dc.identifier.volume | 98 | - |
dc.identifier.spage | article no. 103310 | - |
dc.identifier.epage | article no. 103310 | - |
dc.identifier.eissn | 1361-8423 | - |