Article: Joint Cancer Segmentation and PI-RADS Classification on Multiparametric MRI Using MiniSegCaps Network

Title: Joint Cancer Segmentation and PI-RADS Classification on Multiparametric MRI Using MiniSegCaps Network
Authors: Jiang, Wenting; Lin, Yingying; Vardhanabhuti, Varut; Ming, Yanzhen; Cao, Peng
Keywords: CapsuleNet
convolutional neural network
multi-parametric MRI
PI-RADS classification
prostate cancer
Issue Date: 7-Feb-2023
Publisher: MDPI
Citation: Diagnostics, 2023, v. 13, n. 4
Abstract

MRI is the primary imaging approach for diagnosing prostate cancer. The Prostate Imaging Reporting and Data System (PI-RADS) on multiparametric MRI (mpMRI) provides fundamental MRI interpretation guidelines but suffers from inter-reader variability. Deep learning networks show great promise in automatic lesion segmentation and classification, which helps to ease the burden on radiologists and reduce inter-reader variability. In this study, we proposed a novel multi-branch network, MiniSegCaps, for prostate cancer segmentation and PI-RADS classification on mpMRI. The MiniSeg branch output the segmentation in conjunction with the PI-RADS prediction, guided by the attention map from the CapsuleNet. The CapsuleNet branch exploited the spatial information of prostate cancer relative to anatomical structures, such as the zonal location of the lesion, which also reduced the sample size required for training due to its equivariance properties. In addition, a gated recurrent unit (GRU) was adopted to exploit spatial knowledge across slices, improving through-plane consistency. Based on the clinical reports, we established a prostate mpMRI database from 462 patients paired with radiologically estimated annotations. MiniSegCaps was trained and evaluated with fivefold cross-validation. On 93 testing cases, our model achieved a 0.712 dice coefficient on lesion segmentation, 89.18% accuracy, and 92.52% sensitivity on PI-RADS classification (PI-RADS ≥ 4) in patient-level evaluation, significantly outperforming existing methods. In addition, a graphical user interface (GUI) integrated into the clinical workflow can automatically produce diagnosis reports based on the results from MiniSegCaps.
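The 0.712 dice coefficient reported in the abstract refers to the standard Dice overlap between a predicted lesion mask and its ground-truth annotation. As a point of reference only (the function name and the toy masks below are illustrative, not from the paper), a minimal sketch of the metric:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy 2x4 masks sharing 2 of their 4 foreground pixels: 2*2 / (4 + 4) = 0.5
pred_mask = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
true_mask = np.array([[0, 1, 1, 0], [0, 1, 1, 0]])
print(round(dice_coefficient(pred_mask, true_mask), 3))  # 0.5
```

A value of 1.0 means the masks coincide exactly; 0.712 indicates substantial but imperfect voxel-level overlap, which is typical for lesion segmentation tasks.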


Persistent Identifier: http://hdl.handle.net/10722/338218
ISSN: 2075-4418
2023 Impact Factor: 3.0
2023 SCImago Journal Rankings: 0.667
ISI Accession Number ID: WOS:000939129600001

 

DC Field | Value | Language
dc.contributor.author | Jiang, Wenting | -
dc.contributor.author | Lin, Yingying | -
dc.contributor.author | Vardhanabhuti, Varut | -
dc.contributor.author | Ming, Yanzhen | -
dc.contributor.author | Cao, Peng | -
dc.date.accessioned | 2024-03-11T10:27:09Z | -
dc.date.available | 2024-03-11T10:27:09Z | -
dc.date.issued | 2023-02-07 | -
dc.identifier.citation | Diagnostics, 2023, v. 13, n. 4 | -
dc.identifier.issn | 2075-4418 | -
dc.identifier.uri | http://hdl.handle.net/10722/338218 | -
dc.description.abstract | MRI is the primary imaging approach for diagnosing prostate cancer. The Prostate Imaging Reporting and Data System (PI-RADS) on multiparametric MRI (mpMRI) provides fundamental MRI interpretation guidelines but suffers from inter-reader variability. Deep learning networks show great promise in automatic lesion segmentation and classification, which helps to ease the burden on radiologists and reduce inter-reader variability. In this study, we proposed a novel multi-branch network, MiniSegCaps, for prostate cancer segmentation and PI-RADS classification on mpMRI. The MiniSeg branch output the segmentation in conjunction with the PI-RADS prediction, guided by the attention map from the CapsuleNet. The CapsuleNet branch exploited the spatial information of prostate cancer relative to anatomical structures, such as the zonal location of the lesion, which also reduced the sample size required for training due to its equivariance properties. In addition, a gated recurrent unit (GRU) was adopted to exploit spatial knowledge across slices, improving through-plane consistency. Based on the clinical reports, we established a prostate mpMRI database from 462 patients paired with radiologically estimated annotations. MiniSegCaps was trained and evaluated with fivefold cross-validation. On 93 testing cases, our model achieved a 0.712 dice coefficient on lesion segmentation, 89.18% accuracy, and 92.52% sensitivity on PI-RADS classification (PI-RADS ≥ 4) in patient-level evaluation, significantly outperforming existing methods. In addition, a graphical user interface (GUI) integrated into the clinical workflow can automatically produce diagnosis reports based on the results from MiniSegCaps. | -
dc.language | eng | -
dc.publisher | MDPI | -
dc.relation.ispartof | Diagnostics | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | CapsuleNet | -
dc.subject | convolutional neural network | -
dc.subject | multi-parametric MRI | -
dc.subject | PI-RADS classification | -
dc.subject | prostate cancer | -
dc.title | Joint Cancer Segmentation and PI-RADS Classification on Multiparametric MRI Using MiniSegCaps Network | -
dc.type | Article | -
dc.identifier.doi | 10.3390/diagnostics13040615 | -
dc.identifier.scopus | eid_2-s2.0-85149140147 | -
dc.identifier.volume | 13 | -
dc.identifier.issue | 4 | -
dc.identifier.eissn | 2075-4418 | -
dc.identifier.isi | WOS:000939129600001 | -
dc.identifier.issnl | 2075-4418 | -
