Conference Paper: Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts

Title: Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts
Authors: Zhou, H; Lu, C; Yang, S; Han, X; Yu, Y
Keywords: Representation learning; Computer vision; Protocols; Codes; Computational modeling
Issue Date: 2021
Publisher: IEEE Computer Society
Citation: ICCV Workshop on Deep Multi-Task Learning in Computer Vision (Virtual), Montreal, QC, Canada, October 11-17, 2021. In Proceedings: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021), p. 3479-3489
Abstract: Preserving maximal information is one of the principles of designing self-supervised learning methodologies. To reach this goal, contrastive learning adopts an implicit approach: contrasting image pairs. However, we believe it is not fully optimal to rely solely on contrastive estimation for preservation; it is both necessary and complementary to introduce an explicit solution that preserves more information. From this perspective, we introduce Preservational Learning, which reconstructs diverse image contexts in order to preserve more information in the learned representations. Together with the contrastive loss, we present Preservational Contrastive Representation Learning (PCRL) for learning self-supervised medical representations. PCRL provides very competitive results under the pretraining-finetuning protocol, substantially outperforming both self-supervised and supervised counterparts on 5 classification/segmentation tasks.
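The objective described in the abstract pairs an implicit preservation term (a contrastive loss over image pairs) with an explicit one (a reconstruction loss). The following is an illustrative sketch only, using a generic InfoNCE formulation and mean-squared-error reconstruction; the function names, the loss weighting, and the NumPy setting are assumptions for illustration, not the authors' released PCRL implementation.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss: rows of z1 and z2 with the same
    index are two views of the same image (the positive pairs)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal

def reconstruction_loss(decoded, target):
    """Explicit 'preservational' term: the representation must retain enough
    information for a decoder to rebuild the image context (MSE here)."""
    return np.mean((decoded - target) ** 2)

def pcrl_style_loss(z1, z2, decoded, target, weight=1.0):
    # Combined objective: implicit preservation (contrastive) plus
    # explicit preservation (reconstruction). 'weight' is a hypothetical
    # balancing hyperparameter, not a value taken from the paper.
    return info_nce(z1, z2) + weight * reconstruction_loss(decoded, target)

# Toy usage with random stand-ins for encoder embeddings and decoder output.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
decoded, target = rng.normal(size=(8, 64)), rng.normal(size=(8, 64))
loss = pcrl_style_loss(z1, z2, decoded, target)
```

In this formulation the two terms pull in complementary directions: the contrastive term discards nuisance variation between views, while the reconstruction term penalizes discarding anything a decoder would need, which is the "preserve more information" intuition the abstract states.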
Persistent Identifier: http://hdl.handle.net/10722/316284
ISI Accession Number: WOS:000797698903068

 

DC Field: Value
dc.contributor.author: Zhou, H
dc.contributor.author: Lu, C
dc.contributor.author: Yang, S
dc.contributor.author: Han, X
dc.contributor.author: Yu, Y
dc.date.accessioned: 2022-09-02T06:08:46Z
dc.date.available: 2022-09-02T06:08:46Z
dc.date.issued: 2021
dc.identifier.citation: ICCV Workshop on Deep Multi-Task Learning in Computer Vision (Virtual), Montreal, QC, Canada, October 11-17, 2021. In Proceedings: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021), p. 3479-3489
dc.identifier.uri: http://hdl.handle.net/10722/316284
dc.description.abstract: Preserving maximal information is one of the principles of designing self-supervised learning methodologies. To reach this goal, contrastive learning adopts an implicit approach: contrasting image pairs. However, we believe it is not fully optimal to rely solely on contrastive estimation for preservation; it is both necessary and complementary to introduce an explicit solution that preserves more information. From this perspective, we introduce Preservational Learning, which reconstructs diverse image contexts in order to preserve more information in the learned representations. Together with the contrastive loss, we present Preservational Contrastive Representation Learning (PCRL) for learning self-supervised medical representations. PCRL provides very competitive results under the pretraining-finetuning protocol, substantially outperforming both self-supervised and supervised counterparts on 5 classification/segmentation tasks.
dc.language: eng
dc.publisher: IEEE Computer Society
dc.relation.ispartof: Proceedings: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021)
dc.rights: Proceedings: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021). Copyright © IEEE Computer Society.
dc.subject: Representation learning
dc.subject: Computer vision
dc.subject: Protocols
dc.subject: Codes
dc.subject: Computational modeling
dc.title: Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts
dc.type: Conference_Paper
dc.identifier.email: Yu, Y: yzyu@cs.hku.hk
dc.identifier.authority: Yu, Y=rp01415
dc.identifier.doi: 10.1109/ICCV48922.2021.00348
dc.identifier.hkuros: 336345
dc.identifier.spage: 3479
dc.identifier.epage: 3489
dc.identifier.isi: WOS:000797698903068
dc.publisher.place: United States
