Article: Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports

Title: Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports
Authors: Zhou, H; Chen, X; Zhang, Y; Luo, R; Wang, L; Yu, Y
Issue Date: 2022
Citation: Nature Machine Intelligence, 2022, v. 4, n. 1, p. 32-40
Abstract: Pre-training lays the foundation for recent successes in radiograph analysis supported by deep learning. It learns transferable image representations by conducting large-scale fully- or self-supervised learning on a source domain; however, supervised pre-training requires a complex and labour-intensive two-stage human-assisted annotation process, whereas self-supervised learning cannot compete with the supervised paradigm. To tackle these issues, we propose a cross-supervised methodology called reviewing free-text reports for supervision (REFERS), which acquires free supervision signals from the original radiology reports accompanying the radiographs. The proposed approach employs a vision transformer and is designed to learn joint representations from multiple views within every patient study. REFERS outperforms its transfer learning and self-supervised learning counterparts on four well-known X-ray datasets under extremely limited supervision. Moreover, REFERS even surpasses methods based on a source domain of radiographs with human-assisted structured labels; it therefore has the potential to replace canonical pre-training methodologies.
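
The abstract outlines the shape of the method: a vision transformer encodes the radiographs, per-view features within a patient study are fused into one study-level representation, and the free-text report supplies the training signal in place of human-assigned labels. The sketch below illustrates that idea only; it is not the authors' implementation. Every module name and dimension is hypothetical, and the contrastive image-report alignment loss is just one plausible way to "acquire free supervision signals" from reports.

```python
# Minimal sketch (not the REFERS code) of cross-supervision between
# radiographs and free-text reports, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossSupervisedPretrainer(nn.Module):
    def __init__(self, feat_dim=768, text_dim=768):
        super().__init__()
        # Stand-in for a vision transformer backbone; any image encoder
        # producing [batch, feat_dim] features would slot in here.
        self.image_encoder = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(feat_dim),
            nn.GELU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Attention-based fusion of the views within one patient study.
        self.view_fusion = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        self.to_text_space = nn.Linear(feat_dim, text_dim)

    def forward(self, views):
        # views: [batch, num_views, C, H, W] -> one vector per patient study
        b, v = views.shape[:2]
        feats = self.image_encoder(views.flatten(0, 1)).view(b, v, -1)
        fused, _ = self.view_fusion(feats, feats, feats)  # mix information across views
        study_repr = fused.mean(dim=1)                    # joint study-level representation
        return self.to_text_space(study_repr)

def report_alignment_loss(image_emb, report_emb, temperature=0.07):
    """Contrastive alignment: each study representation should be closest to
    the embedding of its own report. report_emb would come from a text
    encoder over the free-text radiology report (not shown here)."""
    image_emb = F.normalize(image_emb, dim=-1)
    report_emb = F.normalize(report_emb, dim=-1)
    logits = image_emb @ report_emb.t() / temperature
    targets = torch.arange(len(logits), device=logits.device)
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    model = CrossSupervisedPretrainer()
    views = torch.randn(4, 2, 1, 224, 224)   # 4 studies, 2 views each
    reports = torch.randn(4, 768)            # stand-in report embeddings
    print(report_alignment_loss(model(views), reports).item())
```

Because the supervision comes from reports that already exist in clinical workflows, a setup like this needs no extra annotation step, which is the point the abstract makes against supervised pre-training.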
Persistent Identifier: http://hdl.handle.net/10722/315047
ISSN: 2522-5839
2021 Impact Factor: 25.898
2020 SCImago Journal Rankings: 4.894
ISI Accession Number: WOS:000744928500001

dc.contributor.author: Zhou, H
dc.contributor.author: Chen, X
dc.contributor.author: Zhang, Y
dc.contributor.author: Luo, R
dc.contributor.author: Wang, L
dc.contributor.author: Yu, Y
dc.date.accessioned: 2022-08-05T09:39:14Z
dc.date.available: 2022-08-05T09:39:14Z
dc.date.issued: 2022
dc.identifier.citation: Nature Machine Intelligence, 2022, v. 4, n. 1, p. 32-40
dc.identifier.issn: 2522-5839
dc.identifier.uri: http://hdl.handle.net/10722/315047
dc.description.abstract: Pre-training lays the foundation for recent successes in radiograph analysis supported by deep learning. It learns transferable image representations by conducting large-scale fully- or self-supervised learning on a source domain; however, supervised pre-training requires a complex and labour-intensive two-stage human-assisted annotation process, whereas self-supervised learning cannot compete with the supervised paradigm. To tackle these issues, we propose a cross-supervised methodology called reviewing free-text reports for supervision (REFERS), which acquires free supervision signals from the original radiology reports accompanying the radiographs. The proposed approach employs a vision transformer and is designed to learn joint representations from multiple views within every patient study. REFERS outperforms its transfer learning and self-supervised learning counterparts on four well-known X-ray datasets under extremely limited supervision. Moreover, REFERS even surpasses methods based on a source domain of radiographs with human-assisted structured labels; it therefore has the potential to replace canonical pre-training methodologies.
dc.language: eng
dc.relation.ispartof: Nature Machine Intelligence
dc.title: Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports
dc.type: Article
dc.identifier.email: Luo, R: rbluo@cs.hku.hk
dc.identifier.email: Yu, Y: yzyu@cs.hku.hk
dc.identifier.authority: Luo, R=rp02360
dc.identifier.authority: Yu, Y=rp01415
dc.identifier.doi: 10.1038/s42256-021-00425-9
dc.identifier.hkuros: 335141
dc.identifier.volume: 4
dc.identifier.issue: 1
dc.identifier.spage: 32
dc.identifier.epage: 40
dc.identifier.isi: WOS:000744928500001
