Conference Paper: Cross-View Deformable Transformer for Non-displaced Hip Fracture Classification from Frontal-Lateral X-Ray Pair

Title: Cross-View Deformable Transformer for Non-displaced Hip Fracture Classification from Frontal-Lateral X-Ray Pair
Authors: Zhu, Z; Chen, Q; Yu, L; Wang, L; Zhang, D; Magnier, B; Wang, L
Keywords: Cross-view correspondence
Deformable transformer
Hip fracture diagnosis
X-ray image
Issue Date: 1-Oct-2023
Abstract

Hip fractures are a common cause of morbidity and mortality and are usually diagnosed from X-ray images in clinical routine. Deep learning has achieved promising progress in automatic hip fracture detection. However, for fractures whose displacement is not obvious (i.e., non-displaced fractures), a single-view X-ray image provides only limited diagnostic information, and integrating features from cross-view X-ray images (i.e., frontal and lateral views) is needed for an accurate diagnosis. Nevertheless, finding reliable and discriminative cross-view representations for automatic diagnosis remains technically challenging. First, it is difficult to locate discriminative task-related features in each X-ray view under the weak supervision of image-level classification labels. Second, it is hard to extract reliable complementary information between different X-ray views because of the displacement between them. To address these challenges, this paper presents a novel cross-view deformable transformer framework that models relations between critical representations of different views for non-displaced hip fracture identification. Specifically, we adopt a deformable self-attention module to localize discriminative task-related features in each X-ray view using only the image-level label. The located discriminative features are then used to explore correlated representations across views, with the query of the dominant view serving as guidance. Finally, we build a dataset of 768 hip cases, each with a paired frontal/lateral hip X-ray, to evaluate our framework on the task of classifying non-displaced fractures versus normal hips.
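The core mechanism the abstract describes — a query from the dominant view steering deformable attention over the other view's feature map — can be sketched roughly as follows. This is a minimal NumPy illustration of single-head deformable attention (in the spirit of Deformable DETR), not the authors' implementation; `bilinear_sample`, `cross_view_deformable_attention`, and the random projection matrices are hypothetical stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear_sample(feat, py, px):
    """Bilinearly sample feat (H, W, C) at fractional (py, px); zero outside."""
    H, W, C = feat.shape
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    out = np.zeros(C)
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                w = (1 - abs(py - yy)) * (1 - abs(px - xx))
                out += w * feat[yy, xx]
    return out

def cross_view_deformable_attention(query, feat, ref, n_points=4):
    """One head of deformable attention: the query (e.g. from the dominant
    view) predicts a few sampling offsets around a reference point in the
    other view's feature map, plus softmax weights over those samples.
    W_off / W_att are random stand-ins for learned linear projections."""
    C = query.shape[0]
    H, W, _ = feat.shape
    W_off = rng.normal(0.0, 0.02, (n_points * 2, C))  # offset head
    W_att = rng.normal(0.0, 0.02, (n_points, C))      # attention head
    offsets = (W_off @ query).reshape(n_points, 2)
    logits = W_att @ query
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()                                 # softmax over points
    out = np.zeros(C)
    for p in range(n_points):
        py = ref[0] * (H - 1) + offsets[p, 0]          # ref is normalized [0,1]
        px = ref[1] * (W - 1) + offsets[p, 1]
        out += attn[p] * bilinear_sample(feat, py, px)
    return out

# Toy usage: a frontal-view query attends into a lateral-view feature map.
frontal_q = np.zeros(16)             # query vector from the frontal view
lateral_feat = np.ones((8, 8, 16))   # lateral-view feature map
ctx = cross_view_deformable_attention(frontal_q, lateral_feat, np.array([0.5, 0.5]))
```

Sampling only a handful of offset points (rather than attending densely over every spatial location) is what lets the query localize task-related features cheaply; a zero query here predicts zero offsets and uniform weights, so `ctx` reduces to a bilinear read at the reference point.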


Persistent Identifier: http://hdl.handle.net/10722/339356
ISI Accession Number ID: WOS:001109635100043

 

DC Field / Value
dc.contributor.author: Zhu, Z
dc.contributor.author: Chen, Q
dc.contributor.author: Yu, L
dc.contributor.author: Wang, L
dc.contributor.author: Zhang, D
dc.contributor.author: Magnier, B
dc.contributor.author: Wang, L
dc.date.accessioned: 2024-03-11T10:35:57Z
dc.date.available: 2024-03-11T10:35:57Z
dc.date.issued: 2023-10-01
dc.identifier.uri: http://hdl.handle.net/10722/339356
dc.language: eng
dc.relation.ispartof: 26th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2023 (08/10/2023-12/10/2023, Vancouver, Canada)
dc.subject: Cross-view correspondence
dc.subject: Deformable transformer
dc.subject: Hip fracture diagnosis
dc.subject: X-ray image
dc.title: Cross-View Deformable Transformer for Non-displaced Hip Fracture Classification from Frontal-Lateral X-Ray Pair
dc.type: Conference_Paper
dc.identifier.doi: 10.1007/978-3-031-43987-2_43
dc.identifier.scopus: eid_2-s2.0-85174737307
dc.identifier.volume: 14225 LNCS
dc.identifier.spage: 444
dc.identifier.epage: 453
dc.identifier.isi: WOS:001109635100043
