Conference Paper: High diagnostic performance of a deep learning artificial intelligence model in accurately diagnosing hepatocellular carcinoma on computed tomography

Title: High diagnostic performance of a deep learning artificial intelligence model in accurately diagnosing hepatocellular carcinoma on computed tomography
Authors: Seto, WKW; Chiu, WHK; Yu, PLH; Cao, W; Cheng, HM; Lui, GCS; Wong, EMF; Wu, J; Mak, LY; Shen, X; Li, WK; Yuen, RMF
Issue Date: 2020
Publisher: John Wiley & Sons, Inc. The journal's web site is located at http://www.hepatology.org/
Citation: The Annual Meeting of the American Association for the Study of Liver Diseases (AASLD): The Liver Meeting Digital Experience 2020, Boston, USA, 13-16 November 2020. In Hepatology, 2020, v. 72 n. S1, p. 84A-85A, abstract no. 114
Abstract: Background: Hepatocellular carcinoma (HCC) is globally the fourth leading cause of cancer death. The Liver Imaging Reporting and Data System (LI-RADS) categorizes radiological liver lesions by their likelihood of HCC. Intermediate LI-RADS categories do not offer a definitive diagnosis, resulting in repeated scans for patients and, eventually, delays in management and treatment. We developed a deep learning artificial intelligence model for the detection, segmentation and classification of liver lesions on computed tomography (CT), applying a state-of-the-art convolutional neural network (CNN) architecture. Methods: We retrieved archived tri-phasic liver CT images and clinical information, and manually contoured and labelled all images with the diagnostic ground truth. We followed AASLD recommendations for HCC diagnosis and employed the LI-RADS classification for lesion categorization. Diagnoses were validated against a clinical composite reference standard based on patients' outcomes over the subsequent 12 months. We applied data augmentation across all CT phases and constructed a densely connected CNN-based classification model (NVIDIA Tesla V100 GPUs, Dell Technologies) consisting of four interconnected dense blocks and a fully connected layer with an activation function for classification (Figure A), identifying HCC vs. non-HCC. Results: In this interim analysis, we retrieved and contoured 1288 scans with 2551 liver lesions. The cohort's mean age was 59.4±13.6 years; 65.0% were male. Mean lesion size was 36.6±44.5 mm, with 826 lesions (33.1%) validated as HCC. Heat maps of the model's predicted lesion locations were generated and superimposed on the original images (Figure B).
After optimizing over 115 million parameters during model development and dividing the scanned lesions into training and testing sets in a 7:3 ratio, the densely connected CNN achieved a diagnostic accuracy of 97.4%, negative predictive value (NPV) of 98.3%, positive predictive value (PPV) of 96.1%, sensitivity of 97.4% and specificity of 97.5% in the binary classification of HCC. This compared with a diagnostic accuracy of 86.2%, NPV of 86.0%, PPV of 86.5%, sensitivity of 84.6% and specificity of 87.8% via LI-RADS. Conclusion: Our artificial intelligence model, based on an end-to-end densely connected CNN, achieved high diagnostic accuracy in detecting and classifying HCC vs. non-HCC, with improved diagnostic performance compared with LI-RADS. Artificial intelligence has the potential to enhance the diagnosis of HCC and prevent delays in treatment.
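The abstract reports five standard binary-classification metrics (accuracy, sensitivity, specificity, PPV, NPV). As a minimal sketch of how such figures are derived from a confusion matrix, the snippet below computes them from hypothetical counts; the numbers are illustrative only and are not the study's data.

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute the five metrics reported for a binary (HCC vs. non-HCC) classifier
    from confusion-matrix counts: true/false positives and true/false negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy":    (tp + tn) / total,  # fraction of all lesions classified correctly
        "sensitivity": tp / (tp + fn),     # recall on truly positive (HCC) lesions
        "specificity": tn / (tn + fp),     # recall on truly negative (non-HCC) lesions
        "ppv":         tp / (tp + fp),     # positive predictive value
        "npv":         tn / (tn + fn),     # negative predictive value
    }

if __name__ == "__main__":
    # Hypothetical counts for a held-out test set (e.g. the 30% of a 7:3 split)
    metrics = binary_metrics(tp=240, fp=10, tn=500, fn=6)
    for name, value in metrics.items():
        print(f"{name}: {value:.1%}")
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of HCC in the evaluated set, so they are only comparable across studies with similar case mixes.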
Description: Oral Presentation - no. 114
Persistent Identifier: http://hdl.handle.net/10722/305516
ISSN: 0270-9139
2021 Impact Factor: 17.298
2020 SCImago Journal Rankings: 5.488


DC Field: Value
dc.contributor.author: Seto, WKW
dc.contributor.author: Chiu, WHK
dc.contributor.author: Yu, PLH
dc.contributor.author: Cao, W
dc.contributor.author: Cheng, HM
dc.contributor.author: Lui, GCS
dc.contributor.author: Wong, EMF
dc.contributor.author: Wu, J
dc.contributor.author: Mak, LY
dc.contributor.author: Shen, X
dc.contributor.author: Li, WK
dc.contributor.author: Yuen, RMF
dc.date.accessioned: 2021-10-20T10:10:31Z
dc.date.available: 2021-10-20T10:10:31Z
dc.date.issued: 2020
dc.identifier.citation: The Annual Meeting of the American Association for the Study of Liver Diseases (AASLD): The Liver Meeting Digital Experience 2020, Boston, USA, 13-16 November 2020. In Hepatology, 2020, v. 72 n. S1, p. 84A-85A, abstract no. 114
dc.identifier.issn: 0270-9139
dc.identifier.uri: http://hdl.handle.net/10722/305516
dc.description: Oral Presentation - no. 114
dc.language: eng
dc.publisher: John Wiley & Sons, Inc. The journal's web site is located at http://www.hepatology.org/
dc.relation.ispartof: Hepatology
dc.relation.ispartof: The Annual Meeting of the American Association for the Study of Liver Diseases (AASLD): The Liver Meeting Digital Experience 2020
dc.title: High diagnostic performance of a deep learning artificial intelligence model in accurately diagnosing hepatocellular carcinoma on computed tomography
dc.type: Conference_Paper
dc.identifier.email: Seto, WKW: wkseto@hku.hk
dc.identifier.email: Yu, PLH: plhyu@hku.hk
dc.identifier.email: Cao, W: wmingcao@hku.hk
dc.identifier.email: Cheng, HM: hmcheng@hku.hk
dc.identifier.email: Mak, LY: lungyi@hku.hk
dc.identifier.email: Yuen, RMF: mfyuen@hku.hk
dc.identifier.authority: Seto, WKW=rp01659
dc.identifier.authority: Chiu, WHK=rp02074
dc.identifier.authority: Yu, PLH=rp00835
dc.identifier.authority: Lui, GCS=rp00755
dc.identifier.authority: Mak, LY=rp02668
dc.identifier.authority: Yuen, RMF=rp00479
dc.description.nature: abstract
dc.identifier.hkuros: 326965
dc.identifier.volume: 72
dc.identifier.issue: S1
dc.identifier.spage: 84A
dc.identifier.epage: 85A
dc.publisher.place: United States
dc.identifier.partofdoi: 10.1002/hep.31578
