Links for fulltext (may require subscription):
- Publisher website: https://doi.org/10.1007/978-3-030-87196-3_13
- Web of Science: WOS:000712020700013
Citations:
- Web of Science: 0

Appears in Collections:
- Conference Paper: Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation
| Field | Value |
|---|---|
| Title | Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation |
| Authors | Zhang, R; Liu, S; Yu, Y; Li, G |
| Keywords | Brain lesion segmentation; Data augmentation; Convolutional neural network |
| Issue Date | 2021 |
| Publisher | Springer |
| Citation | International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Strasbourg, France, September 27–October 1, 2021, p. 134-144 |
| Abstract | Biomedical image segmentation plays a significant role in computer-aided diagnosis. However, existing CNN-based methods rely heavily on massive manual annotations, which are expensive and labor-intensive to obtain. In this work, we adopt a coarse-to-fine strategy and propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation. Specifically, we design a dual-task network comprising a shared encoder and two independent decoders for segmentation and lesion-region inpainting, respectively. In the first phase, only the segmentation branch is used to obtain a relatively rough segmentation result. In the second phase, we mask the detected lesion regions on the original image based on the initial segmentation map and feed it, together with the original image, into the network again to perform inpainting and segmentation simultaneously. For labeled data, this process is supervised by the segmentation annotations; for unlabeled data, it is guided by the inpainting loss on the masked lesion regions. Since the two tasks rely on similar feature information, the unlabeled data effectively enhances the network's representation of lesion regions and further improves segmentation performance. Moreover, a gated feature fusion (GFF) module is designed to incorporate the complementary features from the two tasks. Experiments on three medical image segmentation datasets covering polyp, skin lesion, and fundus optic disc segmentation demonstrate the outstanding performance of our method compared with other semi-supervised approaches. |
| Persistent Identifier | http://hdl.handle.net/10722/316366 |
| ISI Accession Number ID | WOS:000712020700013 |
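The abstract's gated feature fusion (GFF) module combines complementary features from the segmentation and inpainting decoders. The paper's exact formulation is not reproduced in this record, so the following is only a minimal NumPy sketch of one plausible gating scheme: a learned sigmoid gate interpolates per-channel between the two task-specific feature maps. All function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_feature_fusion(f_seg, f_inp, w_gate, b_gate):
    """Fuse segmentation and inpainting decoder features with a learned gate.

    f_seg, f_inp : (C, H, W) feature maps from the two decoders.
    w_gate       : (C, 2*C) projection producing per-channel, per-pixel gates.
    b_gate       : (C,) gate bias.
    Returns a (C, H, W) fused feature map.
    """
    c, h, w = f_seg.shape
    # Stack both feature maps along channels and flatten spatial dims: (2C, H*W).
    stacked = np.concatenate([f_seg, f_inp], axis=0).reshape(2 * c, -1)
    # Gate values lie in (0, 1), one per channel and pixel.
    gate = sigmoid(w_gate @ stacked + b_gate[:, None]).reshape(c, h, w)
    # The gate interpolates between the two task-specific features,
    # so every fused value stays between the corresponding inputs.
    return gate * f_seg + (1.0 - gate) * f_inp

rng = np.random.default_rng(0)
c, h, w = 4, 8, 8
f_seg = rng.standard_normal((c, h, w))
f_inp = rng.standard_normal((c, h, w))
fused = gated_feature_fusion(f_seg, f_inp,
                             rng.standard_normal((c, 2 * c)) * 0.1,
                             np.zeros(c))
print(fused.shape)  # (4, 8, 8)
```

Because the gate is a convex combination weight, the fused features never leave the elementwise range spanned by the two inputs, which is one reason gating is a common choice for merging task-specific branches.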
| DC Field | Value | Language |
|---|---|---|
dc.contributor.author | Zhang, R | - |
dc.contributor.author | Liu, S | - |
dc.contributor.author | Yu, Y | - |
dc.contributor.author | Li, G | - |
dc.date.accessioned | 2022-09-02T06:10:13Z | - |
dc.date.available | 2022-09-02T06:10:13Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Strasbourg, France, September 27–October 1, 2021, p. 134-144 | - |
dc.identifier.uri | http://hdl.handle.net/10722/316366 | - |
dc.description.abstract | Biomedical image segmentation plays a significant role in computer-aided diagnosis. However, existing CNN-based methods rely heavily on massive manual annotations, which are expensive and labor-intensive to obtain. In this work, we adopt a coarse-to-fine strategy and propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation. Specifically, we design a dual-task network comprising a shared encoder and two independent decoders for segmentation and lesion-region inpainting, respectively. In the first phase, only the segmentation branch is used to obtain a relatively rough segmentation result. In the second phase, we mask the detected lesion regions on the original image based on the initial segmentation map and feed it, together with the original image, into the network again to perform inpainting and segmentation simultaneously. For labeled data, this process is supervised by the segmentation annotations; for unlabeled data, it is guided by the inpainting loss on the masked lesion regions. Since the two tasks rely on similar feature information, the unlabeled data effectively enhances the network's representation of lesion regions and further improves segmentation performance. Moreover, a gated feature fusion (GFF) module is designed to incorporate the complementary features from the two tasks. Experiments on three medical image segmentation datasets covering polyp, skin lesion, and fundus optic disc segmentation demonstrate the outstanding performance of our method compared with other semi-supervised approaches. | - |
dc.language | eng | - |
dc.publisher | Springer. | - |
dc.relation.ispartof | Medical Image Computing and Computer Assisted Intervention – MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part II | - |
dc.subject | Brain lesion segmentation | - |
dc.subject | Data augmentation | - |
dc.subject | Convolutional neural network | - |
dc.title | Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Yu, Y: yzyu@cs.hku.hk | - |
dc.identifier.authority | Yu, Y=rp01415 | - |
dc.identifier.doi | 10.1007/978-3-030-87196-3_13 | - |
dc.identifier.hkuros | 336354 | - |
dc.identifier.spage | 134 | - |
dc.identifier.epage | 144 | - |
dc.identifier.isi | WOS:000712020700013 | - |
dc.publisher.place | Switzerland | - |
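The abstract also describes the correction step itself: the coarse segmentation map is used to mask the predicted lesion region, the masked image is fed back through the network, and the loss is routed by supervision type, with segmentation annotations supervising labeled images and an inpainting loss on the masked pixels guiding unlabeled ones. The following NumPy sketch illustrates only that masking and loss-routing logic under assumed conventions (single-channel images, a 0.5 lesion threshold, L2 inpainting loss, binary cross-entropy segmentation loss); all names are hypothetical and not from the paper.

```python
import numpy as np

def mask_lesion(image, seg_prob, threshold=0.5):
    """Zero out pixels the coarse segmentation marks as lesion."""
    lesion = seg_prob >= threshold        # boolean lesion map, shape (H, W)
    masked = image.copy()
    masked[lesion] = 0.0                  # masked input for the inpainting branch
    return masked, lesion

def correction_loss(recon, image, lesion, seg_prob=None, gt=None):
    """Route the loss by supervision type.

    Labeled images (gt given): binary cross-entropy on the segmentation map.
    Unlabeled images: L2 inpainting loss restricted to the masked lesion pixels.
    """
    if gt is not None:
        eps = 1e-7
        p = np.clip(seg_prob, eps, 1 - eps)
        return -np.mean(gt * np.log(p) + (1 - gt) * np.log(1 - p))
    if not lesion.any():                  # nothing was masked, nothing to inpaint
        return 0.0
    return np.mean((recon[lesion] - image[lesion]) ** 2)

rng = np.random.default_rng(1)
image = rng.random((16, 16))
seg_prob = rng.random((16, 16))          # stand-in for the coarse segmentation
masked, lesion = mask_lesion(image, seg_prob)
recon = image + 0.1 * rng.standard_normal((16, 16))  # stand-in for inpainting output
unsup = correction_loss(recon, image, lesion)
sup = correction_loss(recon, image, lesion, seg_prob,
                      gt=(seg_prob > 0.5).astype(float))
```

Restricting the inpainting loss to the masked region is the key semi-supervised signal described in the abstract: reconstructing a hidden lesion forces the shared encoder to model lesion appearance even when no segmentation label exists.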