File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website: 10.1038/s42256-021-00425-9
- WOS: WOS:000744928500001
Citations:
- Web of Science: 0
Article: Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports
Title | Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports |
---|---|
Authors | Zhou, H; Chen, X; Zhang, Y; Luo, R; Wang, L; Yu, Y |
Issue Date | 2022 |
Citation | Nature Machine Intelligence, 2022, v. 4 n. 1, p. 32-40 |
Abstract | Pre-training lays the foundation for recent successes in radiograph analysis supported by deep learning. It learns transferable image representations by conducting large-scale fully- or self-supervised learning on a source domain; however, supervised pre-training requires a complex and labour-intensive two-stage human-assisted annotation process, whereas self-supervised learning cannot compete with the supervised paradigm. To tackle these issues, we propose a cross-supervised methodology called reviewing free-text reports for supervision (REFERS), which acquires free supervision signals from the original radiology reports accompanying the radiographs. The proposed approach employs a vision transformer and is designed to learn joint representations from multiple views within every patient study. REFERS outperforms its transfer learning and self-supervised learning counterparts on four well-known X-ray datasets under extremely limited supervision. Moreover, REFERS even surpasses methods based on a source domain of radiographs with human-assisted structured labels; it therefore has the potential to replace canonical pre-training methodologies. |
Persistent Identifier | http://hdl.handle.net/10722/315047 |
ISSN | 2522-5839 (2023 Impact Factor: 18.8; 2023 SCImago Journal Rankings: 5.940) |
ISI Accession Number ID | WOS:000744928500001 |
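The abstract describes acquiring "free" supervision for radiograph representation learning from the paired free-text reports. One common way to realize such cross-supervision is a symmetric image-report contrastive alignment objective; the NumPy sketch below illustrates that generic idea only (the function name and the loss form are illustrative assumptions, not the exact REFERS training objectives, which are detailed in the paper itself).

```python
import numpy as np

def cross_supervision_loss(img_emb, txt_emb, temperature=0.07):
    """Illustrative symmetric contrastive loss aligning study-level
    radiograph embeddings with their report embeddings.
    img_emb, txt_emb: (N, D) arrays where row i of each is a matching pair."""
    # L2-normalize so that dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (N, N) similarity matrix
    diag = np.arange(len(img))                # matching pairs sit on the diagonal

    def xent(l):
        # row-wise softmax cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[diag, diag].mean()

    # average the image-to-report and report-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

For correctly paired embeddings the loss is near zero, while mispaired embeddings are penalized, which is what lets the report text supervise the image encoder without any human-assisted labels.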
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhou, H | - |
dc.contributor.author | Chen, X | - |
dc.contributor.author | Zhang, Y | - |
dc.contributor.author | Luo, R | - |
dc.contributor.author | Wang, L | - |
dc.contributor.author | Yu, Y | - |
dc.date.accessioned | 2022-08-05T09:39:14Z | - |
dc.date.available | 2022-08-05T09:39:14Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Nature Machine Intelligence, 2022, v. 4 n. 1, p. 32-40 | - |
dc.identifier.issn | 2522-5839 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315047 | - |
dc.description.abstract | Pre-training lays the foundation for recent successes in radiograph analysis supported by deep learning. It learns transferable image representations by conducting large-scale fully- or self-supervised learning on a source domain; however, supervised pre-training requires a complex and labour-intensive two-stage human-assisted annotation process, whereas self-supervised learning cannot compete with the supervised paradigm. To tackle these issues, we propose a cross-supervised methodology called reviewing free-text reports for supervision (REFERS), which acquires free supervision signals from the original radiology reports accompanying the radiographs. The proposed approach employs a vision transformer and is designed to learn joint representations from multiple views within every patient study. REFERS outperforms its transfer learning and self-supervised learning counterparts on four well-known X-ray datasets under extremely limited supervision. Moreover, REFERS even surpasses methods based on a source domain of radiographs with human-assisted structured labels; it therefore has the potential to replace canonical pre-training methodologies. | - |
dc.language | eng | - |
dc.relation.ispartof | Nature Machine Intelligence | - |
dc.title | Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports | - |
dc.type | Article | - |
dc.identifier.email | Luo, R: rbluo@cs.hku.hk | - |
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | - |
dc.identifier.authority | Luo, R=rp02360 | - |
dc.identifier.authority | Yu, Y=rp01415 | - |
dc.identifier.doi | 10.1038/s42256-021-00425-9 | - |
dc.identifier.hkuros | 335141 | - |
dc.identifier.volume | 4 | - |
dc.identifier.issue | 1 | - |
dc.identifier.spage | 32 | - |
dc.identifier.epage | 40 | - |
dc.identifier.isi | WOS:000744928500001 | - |