
Conference Paper: Semi-supervised Skin Lesion Segmentation via Transformation Consistent Self-ensembling Model

Title: Semi-supervised Skin Lesion Segmentation via Transformation Consistent Self-ensembling Model
Authors: Li, Xiaomeng; Yu, Lequan; Chen, Hao; Fu, Chi Wing; Heng, Pheng Ann
Issue Date: 2018
Citation: British Machine Vision Conference 2018 (BMVC 2018), Newcastle upon Tyne, UK, 3-6 September 2018
Abstract: Automatic skin lesion segmentation on dermoscopic images is an essential component in computer-aided diagnosis of melanoma. Recently, many fully supervised, deep learning-based methods have been proposed for automatic skin lesion segmentation. However, these approaches require massive pixel-wise annotation from experienced dermatologists, which is very costly and time-consuming. In this paper, we present a novel semi-supervised method for skin lesion segmentation, where the network is optimized by a weighted combination of a common supervised loss for labeled inputs only and a regularization loss for both labeled and unlabeled data. To utilize the unlabeled data, our method encourages consistent predictions of the network-in-training for the same input under different regularizations. Aiming at the semi-supervised segmentation problem, we enhance the effect of regularization on pixel-level predictions by introducing a transformation-consistent scheme, covering rotation and flipping, in our self-ensembling model. With only 300 labeled training samples, our method sets a new record on the benchmark of the International Skin Imaging Collaboration (ISIC) 2017 skin lesion segmentation challenge, clearly surpassing fully supervised state-of-the-art methods trained with 2,000 labeled images.
Persistent Identifier: http://hdl.handle.net/10722/299623
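The sketch below is a minimal, illustrative PyTorch rendering of the training objective described in the abstract, not the authors' released code: a supervised loss on labeled images combined with a transformation-consistent regularization loss on both labeled and unlabeled images. The names `model`, `ema_model`, and `weight`, and the mean-teacher-style EMA teacher are assumptions made here for illustration.

```python
# Minimal sketch (assumed names and structure) of a transformation-consistent
# semi-supervised segmentation loss: supervised term on labeled data plus a
# rotation/flip consistency term on labeled + unlabeled data.
import torch
import torch.nn.functional as F


def random_transform(x):
    """Apply a random 90-degree rotation and optional horizontal flip; return params."""
    k = int(torch.randint(0, 4, (1,)))
    x = torch.rot90(x, k, dims=(-2, -1))
    flip = bool(torch.randint(0, 2, (1,)))
    if flip:
        x = torch.flip(x, dims=(-1,))
    return x, (k, flip)


def apply_transform(x, params):
    """Re-apply a previously sampled rotation/flip to another tensor."""
    k, flip = params
    x = torch.rot90(x, k, dims=(-2, -1))
    if flip:
        x = torch.flip(x, dims=(-1,))
    return x


def semi_supervised_loss(model, ema_model, labeled_x, labels, unlabeled_x, weight):
    # Supervised term: computed on labeled images only.
    sup = F.cross_entropy(model(labeled_x), labels)

    # Consistency term: the prediction for a transformed input should match the
    # identically transformed prediction for the original input.
    all_x = torch.cat([labeled_x, unlabeled_x], dim=0)
    x_t, params = random_transform(all_x)
    student = torch.softmax(model(x_t), dim=1)
    with torch.no_grad():
        teacher = torch.softmax(ema_model(all_x), dim=1)
        teacher = apply_transform(teacher, params)  # transform the teacher output the same way
    cons = F.mse_loss(student, teacher)

    # Weighted combination of the two terms, as described in the abstract.
    return sup + weight * cons
```

In practice, the consistency weight is usually ramped up from zero over the early epochs and the EMA teacher is updated after each optimizer step; both details are typical self-ensembling conventions assumed here rather than taken from this record.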

 

DC Field | Value | Language
dc.contributor.author | Li, Xiaomeng | -
dc.contributor.author | Yu, Lequan | -
dc.contributor.author | Chen, Hao | -
dc.contributor.author | Fu, Chi Wing | -
dc.contributor.author | Heng, Pheng Ann | -
dc.date.accessioned | 2021-05-21T03:34:48Z | -
dc.date.available | 2021-05-21T03:34:48Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | British Machine Vision Conference 2018 (BMVC 2018), Newcastle upon Tyne, UK, 3-6 September 2018 | -
dc.identifier.uri | http://hdl.handle.net/10722/299623 | -
dc.description.abstract | Automatic skin lesion segmentation on dermoscopic images is an essential component in computer-aided diagnosis of melanoma. Recently, many fully supervised, deep learning-based methods have been proposed for automatic skin lesion segmentation. However, these approaches require massive pixel-wise annotation from experienced dermatologists, which is very costly and time-consuming. In this paper, we present a novel semi-supervised method for skin lesion segmentation, where the network is optimized by a weighted combination of a common supervised loss for labeled inputs only and a regularization loss for both labeled and unlabeled data. To utilize the unlabeled data, our method encourages consistent predictions of the network-in-training for the same input under different regularizations. Aiming at the semi-supervised segmentation problem, we enhance the effect of regularization on pixel-level predictions by introducing a transformation-consistent scheme, covering rotation and flipping, in our self-ensembling model. With only 300 labeled training samples, our method sets a new record on the benchmark of the International Skin Imaging Collaboration (ISIC) 2017 skin lesion segmentation challenge, clearly surpassing fully supervised state-of-the-art methods trained with 2,000 labeled images. | -
dc.language | eng | -
dc.relation.ispartof | British Machine Vision Conference (BMVC) | -
dc.title | Semi-supervised Skin Lesion Segmentation via Transformation Consistent Self-ensembling Model | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_OA_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85084017407 | -
