Links for fulltext (May Require Subscription):
- Publisher Website: 10.21037/qims-22-531
- Scopus: eid_2-s2.0-85147155108
- WOS: WOS:000890272900001
Article: Deep learning attention-guided radiomics for COVID-19 chest radiograph classification
Title | Deep learning attention-guided radiomics for COVID-19 chest radiograph classification |
---|---|
Authors | Yang, DR; Ren, G; Ni, RY; Huang, YH; Lam, NFD; Sun, HF; Wan, SBN; Wong, MFE; Chan, KK; Tsang, HCH; Xu, L; Wu, TC; Kong, FM; Wáng, YXJ; Qin, J; Chan, LWC; Ying, M; Cai, J |
Keywords | chest radiograph; classification; Coronavirus disease 2019 (COVID-19); deep learning; radiomics |
Issue Date | 1-Feb-2023 |
Publisher | AME Publishing |
Citation | Quantitative Imaging in Medicine and Surgery, 2023, v. 13, n. 2, p. 572-584 |
Abstract | Background: Accurate assessment of coronavirus disease 2019 (COVID-19) lung involvement through chest radiographs plays an important role in effective management of the infection. This study aims to develop a two-step feature merging method that integrates image features from deep learning and radiomics to differentiate COVID-19, non-COVID-19 pneumonia, and normal chest radiographs (CXR). Methods: A deformable convolutional neural network (deformable CNN) was developed and used as a feature extractor to obtain 1,024-dimensional deep learning latent representation (DLR) features. Then, 1,069-dimensional radiomics features were extracted from the region of interest (ROI) guided by the deformable CNN’s attention. The two feature sets were concatenated to generate a merged feature set for classification. For comparison, the same process was applied to the DLR-only feature set to verify the effectiveness of feature concatenation. Results: Using the merged feature set resulted in an overall average accuracy of 91.0% for three-class classification, a statistically significant improvement of 0.6% over the DLR-only classification. The recall and precision for the COVID-19 class were 0.926 and 0.976, respectively. The feature merging method significantly improved classification performance compared to using only deep learning features, regardless of the choice of classifier (P value <0.0001). The F1-scores for the normal, non-COVID-19 pneumonia, and COVID-19 classes were 0.892, 0.890, and 0.950, respectively. Conclusions: A two-step COVID-19 classification framework integrating information from both DLR and radiomics features (guided by a deep learning attention mechanism) was developed. The proposed feature merging method was shown to improve chest radiograph classification performance compared to using only deep learning features. (See the illustrative sketches below.) |
Persistent Identifier | http://hdl.handle.net/10722/338905 |
ISSN | 2223-4292 (2023 Impact Factor: 2.9; 2023 SCImago Journal Rankings: 0.746) |
ISI Accession Number ID | WOS:000890272900001 |
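
The abstract above describes a two-step pipeline: 1,024-dimensional DLR features from a deformable CNN are concatenated with 1,069-dimensional attention-guided radiomics features, and the merged vector is passed to a conventional classifier. The sketch below illustrates only the concatenation and classification step; the randomly generated feature arrays, the train/test split, and the RBF-kernel SVM are illustrative assumptions, not the paper's actual extractor or classifier.

```python
# Illustrative sketch of the feature-merging step: concatenate per-image DLR and
# radiomics feature vectors and train a conventional classifier on the merged set.
# The random features and the SVM choice are placeholders (assumptions), standing
# in for the paper's deformable-CNN and attention-guided radiomics outputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_images = 300                                     # placeholder sample size
dlr = rng.normal(size=(n_images, 1024))            # DLR features (assumed precomputed)
radiomics = rng.normal(size=(n_images, 1069))      # radiomics features from the attention-guided ROI
labels = rng.integers(0, 3, size=n_images)         # 0 = normal, 1 = non-COVID-19 pneumonia, 2 = COVID-19

# Merge: one 2,093-dimensional feature vector per chest radiograph.
merged = np.concatenate([dlr, radiomics], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    merged, labels, test_size=0.2, stratify=labels, random_state=0
)

# The paper reports the merged features help regardless of classifier choice;
# an RBF-kernel SVM with feature standardization is used here purely as an example.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["normal", "non-COVID-19 pneumonia", "COVID-19"]))
```

With random inputs the printed report is meaningless; the point is only the shape of the pipeline: concatenate, split, standardize, fit, evaluate.
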
DC Field | Value | Language |
---|---|---
dc.contributor.author | Yang, DR | - |
dc.contributor.author | Ren, G | - |
dc.contributor.author | Ni, RY | - |
dc.contributor.author | Huang, YH | - |
dc.contributor.author | Lam, NFD | - |
dc.contributor.author | Sun, HF | - |
dc.contributor.author | Wan, SBN | - |
dc.contributor.author | Wong, MFE | - |
dc.contributor.author | Chan, KK | - |
dc.contributor.author | Tsang, HCH | - |
dc.contributor.author | Xu, L | - |
dc.contributor.author | Wu, TC | - |
dc.contributor.author | Kong, FM | - |
dc.contributor.author | Wáng, YXJ | - |
dc.contributor.author | Qin, J | - |
dc.contributor.author | Chan, LWC | - |
dc.contributor.author | Ying, M | - |
dc.contributor.author | Cai, J | - |
dc.date.accessioned | 2024-03-11T10:32:25Z | - |
dc.date.available | 2024-03-11T10:32:25Z | - |
dc.date.issued | 2023-02-01 | - |
dc.identifier.citation | Quantitative Imaging in Medicine and Surgery, 2023, v. 13, n. 2, p. 572-584 | - |
dc.identifier.issn | 2223-4292 | - |
dc.identifier.uri | http://hdl.handle.net/10722/338905 | - |
dc.description.abstract | Background: Accurate assessment of coronavirus disease 2019 (COVID-19) lung involvement through chest radiographs plays an important role in effective management of the infection. This study aims to develop a two-step feature merging method that integrates image features from deep learning and radiomics to differentiate COVID-19, non-COVID-19 pneumonia, and normal chest radiographs (CXR). Methods: A deformable convolutional neural network (deformable CNN) was developed and used as a feature extractor to obtain 1,024-dimensional deep learning latent representation (DLR) features. Then, 1,069-dimensional radiomics features were extracted from the region of interest (ROI) guided by the deformable CNN’s attention. The two feature sets were concatenated to generate a merged feature set for classification. For comparison, the same process was applied to the DLR-only feature set to verify the effectiveness of feature concatenation. Results: Using the merged feature set resulted in an overall average accuracy of 91.0% for three-class classification, a statistically significant improvement of 0.6% over the DLR-only classification. The recall and precision for the COVID-19 class were 0.926 and 0.976, respectively. The feature merging method significantly improved classification performance compared to using only deep learning features, regardless of the choice of classifier (P value <0.0001). The F1-scores for the normal, non-COVID-19 pneumonia, and COVID-19 classes were 0.892, 0.890, and 0.950, respectively. Conclusions: A two-step COVID-19 classification framework integrating information from both DLR and radiomics features (guided by a deep learning attention mechanism) was developed. The proposed feature merging method was shown to improve chest radiograph classification performance compared to using only deep learning features. | -
dc.language | eng | - |
dc.publisher | AME Publishing | - |
dc.relation.ispartof | Quantitative Imaging in Medicine and Surgery | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | chest radiograph | - |
dc.subject | classification | - |
dc.subject | Coronavirus disease 2019 (COVID-19) | - |
dc.subject | deep learning | - |
dc.subject | radiomics | - |
dc.title | Deep learning attention-guided radiomics for COVID-19 chest radiograph classification | - |
dc.type | Article | - |
dc.identifier.doi | 10.21037/qims-22-531 | - |
dc.identifier.scopus | eid_2-s2.0-85147155108 | - |
dc.identifier.volume | 13 | - |
dc.identifier.issue | 2 | - |
dc.identifier.spage | 572 | - |
dc.identifier.epage | 584 | - |
dc.identifier.eissn | 2223-4306 | - |
dc.identifier.isi | WOS:000890272900001 | - |
dc.identifier.issnl | 2223-4306 | - |
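
As a companion to the sketch above, the following snippet illustrates the attention-guided radiomics step mentioned in the Methods of the abstract: a CNN attention map is binarized into an ROI mask, and radiomics features are computed from that region. PyRadiomics is used here as an assumed toolkit (the record does not name one), and the image, attention map, threshold, and enabled feature classes are hypothetical placeholders, so the output will not match the paper's 1,069-dimensional feature set.

```python
# Sketch of deriving an ROI mask from a CNN attention map and extracting radiomics
# features from it. All inputs are synthetic placeholders; PyRadiomics settings are
# kept minimal (first-order + GLCM only) and do not reproduce the paper's feature set.
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor

rng = np.random.default_rng(0)
cxr = (rng.random((1, 256, 256)) * 255).astype(np.float32)   # placeholder single-slice "radiograph"
attention = rng.random((1, 256, 256)).astype(np.float32)     # placeholder attention map in [0, 1]

# Binarize the attention map into a region of interest (assumed threshold of 0.7).
roi_mask = (attention >= 0.7).astype(np.uint8)

image_sitk = sitk.GetImageFromArray(cxr)
mask_sitk = sitk.GetImageFromArray(roi_mask)

# Treat the single-slice volume as 2D and enable only two feature classes for brevity.
extractor = featureextractor.RadiomicsFeatureExtractor(force2D=True, binWidth=25)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("glcm")

features = extractor.execute(image_sitk, mask_sitk, label=1)
feature_vector = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(f"{len(feature_vector)} radiomics features extracted from the attention-guided ROI")
```

In a full pipeline, the resulting dictionary would be flattened into a vector and concatenated with the DLR features, as in the first sketch.
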