Links for fulltext (may require subscription)
- Publisher website (DOI): https://doi.org/10.1007/978-3-031-43987-2_43
- Scopus: eid_2-s2.0-85174737307
- Web of Science: WOS:001109635100043
Conference Paper: Cross-View Deformable Transformer for Non-displaced Hip Fracture Classification from Frontal-Lateral X-Ray Pair
Title | Cross-View Deformable Transformer for Non-displaced Hip Fracture Classification from Frontal-Lateral X-Ray Pair |
---|---|
Authors | Zhu, Z; Chen, Q; Yu, L; Wang, L; Zhang, D; Magnier, B; Wang, L |
Keywords | Cross-view correspondence; Deformable transformer; Hip fracture diagnosis; X-ray image |
Issue Date | 1-Oct-2023 |
Abstract | Hip fractures are a common cause of morbidity and mortality and are usually diagnosed from X-ray images in clinical routine. Deep learning has achieved promising progress in automatic hip fracture detection. However, for fractures where the displacement is not obvious (i.e., non-displaced fractures), a single-view X-ray image provides only limited diagnostic information, and integrating features from cross-view X-ray images (i.e., Frontal/Lateral-view) is needed for an accurate diagnosis. Nevertheless, finding reliable and discriminative cross-view representations for automatic diagnosis remains technically challenging. First, it is difficult to locate discriminative task-related features in each X-ray view given only the weak supervision of image-level classification labels. Second, it is hard to extract reliable complementary information across X-ray views because there is a displacement between them. To address these challenges, this paper presents a novel cross-view deformable transformer framework that models relations between critical representations of different views for non-displaced hip fracture identification. Specifically, we adopt a deformable self-attention module to localize discriminative task-related features in each X-ray view using only the image-level label. The localized discriminative features are then used to explore correlated representations across views, with the query of the dominant view serving as guidance (see the sketch after this table). Furthermore, we build a dataset of 768 hip cases, each with a paired Frontal/Lateral-view hip X-ray, to evaluate our framework on the non-displaced fracture versus normal hip classification task. |
Persistent Identifier | http://hdl.handle.net/10722/339356 |
ISI Accession Number ID | WOS:001109635100043 |
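The abstract outlines two mechanisms: deformable self-attention that localizes task-related features in each view under image-level supervision, and cross-view fusion in which the dominant view's queries guide attention into the other view. As a rough illustration of the cross-view step only, below is a minimal single-scale deformable cross-attention sketch in PyTorch, in the style of Deformable DETR; the class name, `n_points`, and the reference-point convention are assumptions of this sketch, not the authors' released implementation.

```python
# Minimal sketch, assuming a Deformable-DETR-style single-scale attention:
# each dominant-view query predicts K sampling offsets into the other
# view's feature map and attention weights over the K sampled points.
# All names here are hypothetical; this is not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossViewDeformableAttention(nn.Module):
    def __init__(self, dim: int, n_points: int = 4):
        super().__init__()
        self.n_points = n_points
        self.offsets = nn.Linear(dim, 2 * n_points)   # (dx, dy) per sampling point
        self.weights = nn.Linear(dim, n_points)       # one attention weight per point
        self.value_proj = nn.Conv2d(dim, dim, 1)      # project the other view's features
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, queries, ref_points, value_map):
        """queries:    (B, N, C) from the dominant view
        ref_points: (B, N, 2) normalized (x, y) in [0, 1]
        value_map:  (B, C, H, W) features of the other view"""
        B, N, _ = queries.shape
        value = self.value_proj(value_map)                             # (B, C, H, W)

        # Each query samples K locations around its reference point.
        offsets = self.offsets(queries).view(B, N, self.n_points, 2)
        locs = (ref_points.unsqueeze(2) + offsets).clamp(0, 1)         # (B, N, K, 2)
        grid = 2.0 * locs - 1.0                                        # grid_sample expects [-1, 1]
        sampled = F.grid_sample(value, grid, align_corners=False)      # (B, C, N, K)

        # Attention-weighted sum over the K sampled points per query.
        attn = self.weights(queries).softmax(dim=-1)                   # (B, N, K)
        out = (sampled * attn.unsqueeze(1)).sum(dim=-1)                # (B, C, N)
        return self.out_proj(out.transpose(1, 2))                      # (B, N, C)


# Hypothetical usage: frontal-view queries attend into lateral-view features.
attn = CrossViewDeformableAttention(dim=256, n_points=4)
frontal_q = torch.randn(2, 100, 256)        # (B, N, C) dominant-view queries
refs = torch.rand(2, 100, 2)                # normalized reference points
lateral = torch.randn(2, 256, 32, 32)       # (B, C, H, W) lateral-view features
fused = attn(frontal_q, refs, lateral)      # -> (2, 100, 256)
```

In the paper's setting, the per-view deformable self-attention would supply the localized queries and reference points; here they are random placeholders.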
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhu, Z | - |
dc.contributor.author | Chen, Q | - |
dc.contributor.author | Yu, L | - |
dc.contributor.author | Wang, L | - |
dc.contributor.author | Zhang, D | - |
dc.contributor.author | Magnier, B | - |
dc.contributor.author | Wang, L | - |
dc.date.accessioned | 2024-03-11T10:35:57Z | - |
dc.date.available | 2024-03-11T10:35:57Z | - |
dc.date.issued | 2023-10-01 | - |
dc.identifier.uri | http://hdl.handle.net/10722/339356 | - |
dc.description.abstract | Hip fractures are a common cause of morbidity and mortality and are usually diagnosed from X-ray images in clinical routine. Deep learning has achieved promising progress in automatic hip fracture detection. However, for fractures where the displacement is not obvious (i.e., non-displaced fractures), a single-view X-ray image provides only limited diagnostic information, and integrating features from cross-view X-ray images (i.e., Frontal/Lateral-view) is needed for an accurate diagnosis. Nevertheless, finding reliable and discriminative cross-view representations for automatic diagnosis remains technically challenging. First, it is difficult to locate discriminative task-related features in each X-ray view given only the weak supervision of image-level classification labels. Second, it is hard to extract reliable complementary information across X-ray views because there is a displacement between them. To address these challenges, this paper presents a novel cross-view deformable transformer framework that models relations between critical representations of different views for non-displaced hip fracture identification. Specifically, we adopt a deformable self-attention module to localize discriminative task-related features in each X-ray view using only the image-level label. The localized discriminative features are then used to explore correlated representations across views, with the query of the dominant view serving as guidance. Furthermore, we build a dataset of 768 hip cases, each with a paired Frontal/Lateral-view hip X-ray, to evaluate our framework on the non-displaced fracture versus normal hip classification task. | -
dc.language | eng | - |
dc.relation.ispartof | 26th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2023 (08/10/2023-12/10/2023, Vancouver, Canada) | - |
dc.subject | Cross-view correspondence | - |
dc.subject | Deformable transformer | - |
dc.subject | Hip fracture diagnosis | - |
dc.subject | X-ray image | - |
dc.title | Cross-View Deformable Transformer for Non-displaced Hip Fracture Classification from Frontal-Lateral X-Ray Pair | - |
dc.type | Conference_Paper | - |
dc.identifier.doi | 10.1007/978-3-031-43987-2_43 | - |
dc.identifier.scopus | eid_2-s2.0-85174737307 | - |
dc.identifier.volume | 14225 LNCS | - |
dc.identifier.spage | 444 | - |
dc.identifier.epage | 453 | - |
dc.identifier.isi | WOS:001109635100043 | - |