Article: Automated Microbubble Discrimination in Ultrasound Localization Microscopy by Vision Transformer

Title: Automated Microbubble Discrimination in Ultrasound Localization Microscopy by Vision Transformer
Authors: Wang, Renxian; Lee, Wei-Ning
Issue Date: 12-May-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 2025
Abstract

Ultrasound localization microscopy (ULM) has revolutionized microvascular imaging by breaking the acoustic diffraction limit. However, different ULM workflows depend heavily on distinct prior knowledge, such as the impulse response and the empirical selection of parameters (e.g., the number of microbubbles (MBs) per frame, M), or the consistency between training and test datasets in deep learning (DL)-based studies. We hereby propose a general ULM pipeline that reduces such priors. Our approach leverages a DL model that simultaneously distills MB signals and reduces speckle in every frame without estimating the impulse response or M. Our method features an efficient channel attention vision transformer (ViT) and a progressive learning strategy, enabling it to learn global information through training on progressively increasing patch sizes. Ample synthetic data were generated using the k-Wave toolbox to simulate various MB patterns, thus overcoming the deficiency of labeled data. The ViT output was further processed by a standard radial symmetry method for sub-pixel localization. Our method performed well on model-unseen public datasets: one in silico dataset with ground truth and four in vivo datasets of mouse tumor, rat brain, rat brain bolus, and rat kidney. Our pipeline outperformed conventional ULM, achieving higher positive predictive values (precision in DL terms, 0.88-0.41 vs. 0.83-0.16) and improved accuracy (root-mean-square errors: 0.25-0.14 λ vs. 0.31-0.13 λ) across signal-to-noise ratios from 60 dB down to 10 dB. Our model detected more vessels in diverse in vivo datasets while achieving resolutions comparable to those of the standard method. The proposed ViT-based model, seamlessly integrated with state-of-the-art downstream ULM steps, improved overall ULM performance with no priors.
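The abstract's sub-pixel localization step uses "a standard radial symmetry method." As an illustration only (not the authors' implementation), radial-symmetry center finding can be sketched as a weighted least-squares fit for the point closest to all pixel gradient lines: for a radially symmetric spot, every intensity gradient points toward (or away from) the center.

```python
import numpy as np

def radial_symmetry_center(patch):
    """Estimate the sub-pixel center (x, y) of a radially symmetric spot.

    Each pixel defines a line through its position along its intensity-gradient
    direction; the center is the point minimizing the gradient-magnitude-weighted
    squared distance to all such lines (a 2x2 linear system).
    """
    gy, gx = np.gradient(patch.astype(float))  # gradients along rows (y), cols (x)
    mag2 = gx**2 + gy**2
    mask = mag2 > 1e-12                        # ignore flat pixels
    ys, xs = np.nonzero(mask)
    w = mag2[mask]
    n = np.sqrt(w)
    ux, uy = gx[mask] / n, gy[mask] / n        # unit gradient directions

    # Normal equations: sum_k w_k (I - u u^T) c = sum_k w_k (I - u u^T) p_k,
    # where (I - u u^T) projects onto the direction perpendicular to the line.
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for x, y, uxk, uyk, wk in zip(xs, ys, ux, uy, w):
        P = np.eye(2) - np.outer([uxk, uyk], [uxk, uyk])
        A += wk * P
        b += wk * (P @ np.array([x, y], float))
    cx, cy = np.linalg.solve(A, b)
    return cx, cy
```

On a synthetic Gaussian spot this recovers the center to a small fraction of a pixel; practical ULM pipelines use the optimized closed-form variant of this idea (Parthasarathy's radial-symmetry localizer).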
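The reported positive predictive value (precision) and RMSE are standard detection metrics: estimated localizations are matched to ground-truth positions within a distance tolerance (e.g., a fraction of the wavelength λ), matched pairs count as true positives, and the RMSE is taken over matched distances. A minimal sketch with a greedy nearest-neighbor matcher follows; the matching rule and tolerance are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def match_and_score(est, gt, tol):
    """Greedily match estimated positions to ground truth within `tol`.

    Returns (precision, rmse): precision = true positives / all estimates;
    rmse is computed over the matched localization errors.
    """
    est = np.asarray(est, float)
    gt = np.asarray(gt, float)
    used = np.zeros(len(gt), bool)   # each ground-truth point matches at most once
    errs = []
    for p in est:
        d = np.linalg.norm(gt - p, axis=1)
        d[used] = np.inf             # exclude already-matched ground truth
        j = int(np.argmin(d))
        if d[j] <= tol:
            used[j] = True
            errs.append(d[j])
    precision = len(errs) / len(est) if len(est) else 0.0
    rmse = float(np.sqrt(np.mean(np.square(errs)))) if errs else float("nan")
    return precision, rmse
```

With positions expressed in units of λ and `tol` set to the evaluation radius, `rmse` is directly comparable to the λ-scaled errors quoted in the abstract.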


Persistent Identifier: http://hdl.handle.net/10722/356050
ISSN: 0885-3010
2023 Impact Factor: 3.0
2023 SCImago Journal Rankings: 0.945

 

DC Field: Value
dc.contributor.author: Wang, Renxian
dc.contributor.author: Lee, Wei-Ning
dc.date.accessioned: 2025-05-22T00:35:22Z
dc.date.available: 2025-05-22T00:35:22Z
dc.date.issued: 2025-05-12
dc.identifier.citation: IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 2025
dc.identifier.issn: 0885-3010
dc.identifier.uri: http://hdl.handle.net/10722/356050
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: Automated Microbubble Discrimination in Ultrasound Localization Microscopy by Vision Transformer
dc.type: Article
dc.identifier.doi: 10.1109/TUFFC.2025.3570496
dc.identifier.eissn: 1525-8955
dc.identifier.issnl: 0885-3010
