
Article: SpineHRformer: A Transformer-Based Deep Learning Model for Automatic Spine Deformity Assessment with Prospective Validation

Title: SpineHRformer: A Transformer-Based Deep Learning Model for Automatic Spine Deformity Assessment with Prospective Validation
Authors: Zhao, Moxin; Meng, Nan; Cheung, Jason Pui Yin; Yu, Chenxi; Lu, Pengyu; Zhang, Teng
Issue Date: 20-Nov-2023
Publisher: MDPI
Citation: Bioengineering, 2023, v. 10, n. 11
Abstract

The Cobb angle (CA) serves as the principal method for assessing spinal deformity, but manual measurements of the CA are time-consuming and susceptible to inter- and intra-observer variability. While learning-based methods, such as SpineHRNet+, have demonstrated potential in automating CA measurement, their accuracy can be influenced by the severity of spinal deformity, image quality, the relative position of the ribs and vertebrae, and other factors. Our aim is to create a reliable learning-based approach that provides consistent and highly accurate measurements of the CA from posteroanterior (PA) X-rays, surpassing the state-of-the-art method. To accomplish this, we introduce SpineHRformer, which uses separate output heads to identify anatomical landmarks, namely the endplate vertices from the 7th cervical vertebra (C7) to the 5th lumbar vertebra (L5) and the end vertebrae, enabling the calculation of CAs. Within SpineHRformer, an HRNet backbone first extracts multi-scale features from the input X-ray, and transformer blocks then extract local and global features from the HRNet outputs. Subsequently, an output head generates heatmaps of the endplate or end-vertebra landmarks, from which the CAs are computed. We used a dataset of 1934 PA X-rays with diverse degrees of spinal deformity and image quality, split 8:2 to train and test the model. The experimental results indicate that SpineHRformer outperforms SpineHRNet+ in landmark detection (mean Euclidean distance: 2.47 pixels vs. 2.74 pixels), CA prediction (Pearson correlation coefficient: 0.86 vs. 0.83), and severity grading (sensitivity: normal-mild, 0.93 vs. 0.74; moderate, 0.74 vs. 0.77; severe, 0.74 vs. 0.7). Our approach demonstrates greater robustness and accuracy than SpineHRNet+, offering substantial potential for improving the efficiency and reliability of CA measurements in clinical settings.
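The final step described in the abstract, decoding landmark heatmaps and computing CAs from the resulting endplate lines, can be sketched as below. This is a minimal illustration, not code from the paper: `decode_heatmap` and `cobb_angle` are hypothetical names, and it assumes each output-head heatmap encodes a single landmark as its peak response.

```python
import numpy as np

def decode_heatmap(heatmap):
    """Return the (x, y) pixel location of the peak in a single-landmark heatmap."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return float(x), float(y)

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle in degrees between two endplate lines,
    each given as a pair of (x, y) landmark points."""
    u = np.subtract(upper_endplate[1], upper_endplate[0]).astype(float)
    v = np.subtract(lower_endplate[1], lower_endplate[0]).astype(float)
    # Absolute cosine so the angle is independent of point ordering along each line.
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))
```

For example, a horizontal upper endplate and a lower endplate tilted at 45 degrees yield `cobb_angle(((0, 0), (10, 0)), ((0, 0), (10, 10)))` of 45 degrees.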


Persistent Identifier: http://hdl.handle.net/10722/335659
ISSN: 2306-5354
2023 Impact Factor: 3.8
2023 SCImago Journal Rankings: 0.627

 

DC Field | Value | Language
dc.contributor.author | Zhao, Moxin | -
dc.contributor.author | Meng, Nan | -
dc.contributor.author | Cheung, Jason Pui Yin | -
dc.contributor.author | Yu, Chenxi | -
dc.contributor.author | Lu, Pengyu | -
dc.contributor.author | Zhang, Teng | -
dc.date.accessioned | 2023-12-04T09:37:28Z | -
dc.date.available | 2023-12-04T09:37:28Z | -
dc.date.issued | 2023-11-20 | -
dc.identifier.citation | Bioengineering, 2023, v. 10, n. 11 | -
dc.identifier.issn | 2306-5354 | -
dc.identifier.uri | http://hdl.handle.net/10722/335659 | -
dc.description.abstract | <p>The Cobb angle (CA) serves as the principal method for assessing spinal deformity, but manual measurements of the CA are time-consuming and susceptible to inter- and intra-observer variability. While learning-based methods, such as SpineHRNet+, have demonstrated potential in automating CA measurement, their accuracy can be influenced by the severity of spinal deformity, image quality, relative position of rib and vertebrae, etc. Our aim is to create a reliable learning-based approach that provides consistent and highly accurate measurements of the CA from posteroanterior (PA) X-rays, surpassing the state-of-the-art method. To accomplish this, we introduce SpineHRformer, which identifies anatomical landmarks, including the vertices of endplates from the 7th cervical vertebra (C7) to the 5th lumbar vertebra (L5) and the end vertebrae with different output heads, enabling the calculation of CAs. Within our SpineHRformer, a backbone HRNet first extracts multi-scale features from the input X-ray, while transformer blocks extract local and global features from the HRNet outputs. Subsequently, an output head to generate heatmaps of the endplate landmarks or end vertebra landmarks facilitates the computation of CAs. We used a dataset of 1934 PA X-rays with diverse degrees of spinal deformity and image quality, following an 8:2 ratio to train and test the model. The experimental results indicate that SpineHRformer outperforms SpineHRNet+ in landmark detection (Mean Euclidean Distance: 2.47 pixels vs. 2.74 pixels), CA prediction (Pearson correlation coefficient: 0.86 vs. 0.83), and severity grading (sensitivity: normal-mild; 0.93 vs. 0.74, moderate; 0.74 vs. 0.77, severe; 0.74 vs. 0.7). Our approach demonstrates greater robustness and accuracy compared to SpineHRNet+, offering substantial potential for improving the efficiency and reliability of CA measurements in clinical settings.</p> | -
dc.language | eng | -
dc.publisher | MDPI | -
dc.relation.ispartof | Bioengineering | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.title | SpineHRformer: A Transformer-Based Deep Learning Model for Automatic Spine Deformity Assessment with Prospective Validation | -
dc.type | Article | -
dc.identifier.doi | 10.3390/bioengineering10111333 | -
dc.identifier.volume | 10 | -
dc.identifier.issue | 11 | -
dc.identifier.eissn | 2306-5354 | -
dc.identifier.issnl | 2306-5354 | -
