Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/TCSVT.2025.3563411
- Scopus: eid_2-s2.0-105003647979
Citations:
- Scopus: 0
Article: Vivim: a Video Vision Mamba for Ultrasound Video Segmentation
| Title | Vivim: a Video Vision Mamba for Ultrasound Video Segmentation |
|---|---|
| Authors | Yang, Yijun; Xing, Zhaohu; Yu, Lequan; Fu, Huazhu; Huang, Chunwang; Zhu, Lei |
| Keywords | Breast lesion segmentation; polyp segmentation; State space model; Thyroid segmentation; Ultrasound videos |
| Issue Date | 1-Jan-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Circuits and Systems for Video Technology, 2025 |
| Abstract | Ultrasound video segmentation has gained increasing attention in clinical practice because of the abundant dynamic references available across video frames. However, traditional convolutional neural networks have a limited receptive field, and transformer-based networks are unsatisfactory for constructing long-term dependencies because of their computational complexity. This bottleneck poses a significant challenge when processing longer sequences in medical video analysis tasks on devices with limited memory. Recently, state space models (SSMs), popularized by Mamba, have exhibited linear complexity and impressive achievements in efficient long-sequence modeling, and have advanced deep neural networks by significantly expanding the receptive field on many vision tasks. Unfortunately, vanilla SSMs fail to simultaneously capture causal temporal cues and preserve non-causal spatial information. To this end, this paper presents a Video Vision Mamba-based framework, dubbed Vivim, for ultrasound video segmentation tasks. Our Vivim can effectively compress the long-term spatiotemporal representation into sequences at varying scales with our designed Temporal Mamba Block. We also introduce an improved boundary-aware affine constraint across frames to enhance the discriminative ability of Vivim on ambiguous lesions. Extensive experiments on thyroid segmentation in ultrasound videos, breast lesion segmentation in ultrasound videos, and polyp segmentation in colonoscopy videos demonstrate the effectiveness and efficiency of our Vivim, which is superior to existing methods. |
| Persistent Identifier | http://hdl.handle.net/10722/362615 |
| ISSN | 1051-8215 |
| 2023 Impact Factor | 8.3 |
| 2023 SCImago Journal Rankings | 2.299 |
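The abstract's claim that SSMs achieve linear complexity in sequence length comes from their recurrent formulation: each step updates a fixed-size hidden state, so processing T frames costs O(T) rather than the O(T²) of self-attention. The sketch below is a generic discretized SSM scan (x_t = A x_{t-1} + B u_t, y_t = C x_t) for illustration only; it is not the paper's Temporal Mamba Block, and the function name `ssm_scan` and all parameter values are assumptions introduced here.

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Generic linear-time state space scan (illustrative, not Vivim's actual block).

    Recurrence: x_t = A @ x_{t-1} + B * u_t,  output: y_t = C @ x_t.
    One fixed-size state update per input step => O(T) in sequence length.
    """
    d = A.shape[0]
    x = np.zeros(d)          # hidden state, constant size regardless of T
    ys = []
    for u_t in u:            # single pass over the sequence
        x = A @ x + B * u_t  # state transition plus input injection
        ys.append(C @ x)     # readout
    return np.array(ys)

# Scalar-state example: A = 0.5 contracts the state, so the output is a
# decaying summary of the whole input history computed in one linear pass.
y = ssm_scan(np.ones(8), A=np.array([[0.5]]), B=np.array([1.0]), C=np.array([1.0]))
```

Because the state is fixed-size, memory use does not grow with the number of frames, which is the property the abstract highlights for long medical video sequences.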
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yang, Yijun | - |
| dc.contributor.author | Xing, Zhaohu | - |
| dc.contributor.author | Yu, Lequan | - |
| dc.contributor.author | Fu, Huazhu | - |
| dc.contributor.author | Huang, Chunwang | - |
| dc.contributor.author | Zhu, Lei | - |
| dc.date.accessioned | 2025-09-26T00:36:28Z | - |
| dc.date.available | 2025-09-26T00:36:28Z | - |
| dc.date.issued | 2025-01-01 | - |
| dc.identifier.citation | IEEE Transactions on Circuits and Systems for Video Technology, 2025 | - |
| dc.identifier.issn | 1051-8215 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362615 | - |
| dc.description.abstract | <p>Ultrasound video segmentation has gained increasing attention in clinical practice because of the abundant dynamic references available across video frames. However, traditional convolutional neural networks have a limited receptive field, and transformer-based networks are unsatisfactory for constructing long-term dependencies because of their computational complexity. This bottleneck poses a significant challenge when processing longer sequences in medical video analysis tasks on devices with limited memory. Recently, state space models (SSMs), popularized by Mamba, have exhibited linear complexity and impressive achievements in efficient long-sequence modeling, and have advanced deep neural networks by significantly expanding the receptive field on many vision tasks. Unfortunately, vanilla SSMs fail to simultaneously capture causal temporal cues and preserve non-causal spatial information. To this end, this paper presents a Video Vision Mamba-based framework, dubbed Vivim, for ultrasound video segmentation tasks. Our Vivim can effectively compress the long-term spatiotemporal representation into sequences at varying scales with our designed Temporal Mamba Block. We also introduce an improved boundary-aware affine constraint across frames to enhance the discriminative ability of Vivim on ambiguous lesions. Extensive experiments on thyroid segmentation in ultrasound videos, breast lesion segmentation in ultrasound videos, and polyp segmentation in colonoscopy videos demonstrate the effectiveness and efficiency of our Vivim, which is superior to existing methods.</p> | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Circuits and Systems for Video Technology | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | Breast lesion segmentation | - |
| dc.subject | polyp segmentation | - |
| dc.subject | State space model | - |
| dc.subject | Thyroid segmentation | - |
| dc.subject | Ultrasound videos | - |
| dc.title | Vivim: a Video Vision Mamba for Ultrasound Video Segmentation | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TCSVT.2025.3563411 | - |
| dc.identifier.scopus | eid_2-s2.0-105003647979 | - |
| dc.identifier.eissn | 1558-2205 | - |
| dc.identifier.issnl | 1051-8215 | - |
