Article: Deep learning for the automatic detection and segmentation of parotid gland tumors on MRI

Title: Deep learning for the automatic detection and segmentation of parotid gland tumors on MRI
Authors: Zhang, Rongli; Wong, Lun M.; So, Tiffany Y.; Cai, Zongyou; Deng, Qiao; Tsang, Yip Man; Ai, Qi Yong H.; King, Ann D.
Keywords: Automatic tumor detection and segmentation; Deep learning; Non-contrast-enhanced MRI; Parotid gland tumors
Issue Date: 2024
Citation: Oral Oncology, 2024, v. 152, article no. 106796
Abstract: Objectives: Parotid gland tumors (PGTs) often occur as incidental findings on magnetic resonance images (MRI) that may be overlooked. This study aimed to construct and validate a deep learning model to automatically identify parotid glands (PGs) with a PGT from normal PGs, and in those with a PGT to segment the tumor. Materials and methods: The nnUNet combined with a PG-specific post-processing procedure was used to develop the deep learning model trained on T1-weighted images (T1WI) in 311 patients (180 PGs with tumors and 442 normal PGs) and fat-suppressed (FS)-T2WI in 257 patients (125 PGs with tumors and 389 normal PGs), for detecting and segmenting PGTs with five-fold cross-validation. An additional validation set separated by time, comprising T1WI in 34 and FS-T2WI in 41 patients, was used to validate the model performance. Results and conclusion: To identify PGs with tumors from normal PGs, using combined T1WI and FS-T2WI, the deep learning model achieved an accuracy, sensitivity and specificity of 98.2% (497/506), 100% (119/119) and 97.7% (378/387), respectively, in the cross-validation set and 98.5% (67/68), 100% (20/20) and 97.9% (47/48), respectively, in the validation set. For patients with PGTs, automatic segmentation of PGTs on T1WI and FS-T2WI achieved mean Dice coefficients of 86.1% and 84.2%, respectively, in the cross-validation set, and of 85.9% and 81.0%, respectively, in the validation set. The proposed deep learning model may assist the detection and segmentation of PGTs and, by acting as a second pair of eyes, ensure that incidentally detected PGTs on MRI are not missed.
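
As a minimal, hedged illustration of the evaluation described in the abstract (not the study's own code), the Python sketch below computes the per-gland detection accuracy, sensitivity and specificity together with the Dice coefficient used to score tumor segmentation; the function names, argument names and the 0/1 mask convention are assumptions made for illustration.

    import numpy as np

    def dice_coefficient(pred_mask, gt_mask):
        """Dice = 2|A & B| / (|A| + |B|) for a predicted and a ground-truth binary tumor mask."""
        pred = np.asarray(pred_mask, dtype=bool)
        gt = np.asarray(gt_mask, dtype=bool)
        denom = pred.sum() + gt.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, gt).sum() / denom

    def detection_metrics(pred_has_tumor, gt_has_tumor):
        """Per-gland accuracy, sensitivity and specificity from 0/1 tumor-present labels."""
        pred = np.asarray(pred_has_tumor, dtype=bool)
        gt = np.asarray(gt_has_tumor, dtype=bool)
        tp = np.sum(pred & gt)    # tumor-bearing glands correctly flagged
        tn = np.sum(~pred & ~gt)  # normal glands correctly cleared
        fp = np.sum(pred & ~gt)
        fn = np.sum(~pred & gt)
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return accuracy, sensitivity, specificity

For reference, the cross-validation figures quoted in the abstract correspond to 497/506 glands classified correctly overall (accuracy), 119/119 tumor-bearing glands detected (sensitivity) and 378/387 normal glands correctly identified (specificity).
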
Persistent Identifier: http://hdl.handle.net/10722/353166
ISSN: 1368-8375
2023 Impact Factor: 4.0
2023 SCImago Journal Rankings: 1.257

 

DC Field: Value
dc.contributor.author: Zhang, Rongli
dc.contributor.author: Wong, Lun M.
dc.contributor.author: So, Tiffany Y.
dc.contributor.author: Cai, Zongyou
dc.contributor.author: Deng, Qiao
dc.contributor.author: Tsang, Yip Man
dc.contributor.author: Ai, Qi Yong H.
dc.contributor.author: King, Ann D.
dc.date.accessioned: 2025-01-13T03:02:25Z
dc.date.available: 2025-01-13T03:02:25Z
dc.date.issued: 2024
dc.identifier.citation: Oral Oncology, 2024, v. 152, article no. 106796
dc.identifier.issn: 1368-8375
dc.identifier.uri: http://hdl.handle.net/10722/353166
dc.description.abstract: Objectives: Parotid gland tumors (PGTs) often occur as incidental findings on magnetic resonance images (MRI) that may be overlooked. This study aimed to construct and validate a deep learning model to automatically identify parotid glands (PGs) with a PGT from normal PGs, and in those with a PGT to segment the tumor. Materials and methods: The nnUNet combined with a PG-specific post-processing procedure was used to develop the deep learning model trained on T1-weighted images (T1WI) in 311 patients (180 PGs with tumors and 442 normal PGs) and fat-suppressed (FS)-T2WI in 257 patients (125 PGs with tumors and 389 normal PGs), for detecting and segmenting PGTs with five-fold cross-validation. An additional validation set separated by time, comprising T1WI in 34 and FS-T2WI in 41 patients, was used to validate the model performance. Results and conclusion: To identify PGs with tumors from normal PGs, using combined T1WI and FS-T2WI, the deep learning model achieved an accuracy, sensitivity and specificity of 98.2% (497/506), 100% (119/119) and 97.7% (378/387), respectively, in the cross-validation set and 98.5% (67/68), 100% (20/20) and 97.9% (47/48), respectively, in the validation set. For patients with PGTs, automatic segmentation of PGTs on T1WI and FS-T2WI achieved mean Dice coefficients of 86.1% and 84.2%, respectively, in the cross-validation set, and of 85.9% and 81.0%, respectively, in the validation set. The proposed deep learning model may assist the detection and segmentation of PGTs and, by acting as a second pair of eyes, ensure that incidentally detected PGTs on MRI are not missed.
dc.language: eng
dc.relation.ispartof: Oral Oncology
dc.subject: Automatic tumor detection and segmentation
dc.subject: Deep learning
dc.subject: Non-contrast-enhanced MRI
dc.subject: Parotid gland tumors
dc.title: Deep learning for the automatic detection and segmentation of parotid gland tumors on MRI
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.oraloncology.2024.106796
dc.identifier.pmid: 38615586
dc.identifier.scopus: eid_2-s2.0-85190239821
dc.identifier.volume: 152
dc.identifier.spage: article no. 106796
dc.identifier.epage: article no. 106796
dc.identifier.eissn: 1879-0593
