Article: Integration of Convolutional Neural Networks and Object-Based Post-Classification Refinement for Land Use and Land Cover Mapping with Optical and SAR Data

Title: Integration of Convolutional Neural Networks and Object-Based Post-Classification Refinement for Land Use and Land Cover Mapping with Optical and SAR Data
Authors: Liu, S; Qi, Z; Li, X; Yeh, AGO
Keywords: object-based post-classification refinement (OBPR)
convolutional neural network (CNN)
synthetic aperture radar (SAR)
land use and land cover
object-based image analysis (OBIA)
Issue Date: 2019
Publisher: MDPI AG. The journal's web site is located at http://www.mdpi.com/journal/remotesensing/
Citation: Remote Sensing, 2019, v. 11, n. 6, article no. 690
Abstract: Object-based image analysis (OBIA) has been widely used for land use and land cover (LULC) mapping with optical and synthetic aperture radar (SAR) images because it can exploit spatial information, reduce the salt-and-pepper effect, and delineate LULC boundaries. With recent advances in machine learning, convolutional neural networks (CNNs) have become state-of-the-art classification algorithms. However, CNNs cannot be easily integrated with OBIA because the processing unit of a CNN is a rectangular image patch, whereas that of OBIA is an irregular image object. To obtain object-based thematic maps, this study developed a new method that integrates object-based post-classification refinement (OBPR) and CNNs for LULC mapping using Sentinel optical and SAR data. After the CNN produces a classification map, each image object is labeled with the most frequent land cover category of its pixels. The proposed method was tested on the optical-SAR Sentinel Guangzhou dataset (10 m spatial resolution), the optical-SAR Zhuhai-Macau local climate zones (LCZ) dataset (100 m spatial resolution), and the hyperspectral University of Pavia benchmark (1.3 m spatial resolution). It outperformed OBIA with support vector machine (SVM) and random forest (RF) classifiers. SVM and RF benefited more than the CNN from the combined use of optical and SAR data, whereas the spatial information learned by the CNN was very effective for classification. With its ability to extract spatial features while maintaining object boundaries, the proposed method considerably improved the classification accuracy of urban ground targets, achieving an overall accuracy (OA) of 95.33% on the Sentinel Guangzhou dataset, 77.64% on the Zhuhai-Macau LCZ dataset, and 95.70% on the University of Pavia dataset with only 10 labeled samples per class.
Persistent Identifier: http://hdl.handle.net/10722/277146
ISSN: 2072-4292
2023 Impact Factor: 4.2
2023 SCImago Journal Rankings: 1.091
ISI Accession Number ID: WOS:000465615300067

DC Field | Value | Language
dc.contributor.author | Liu, S | -
dc.contributor.author | Qi, Z | -
dc.contributor.author | Li, X | -
dc.contributor.author | Yeh, AGO | -
dc.date.accessioned | 2019-09-20T08:45:28Z | -
dc.date.available | 2019-09-20T08:45:28Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Remote Sensing, 2019, v. 11 n. 6, p. article no. 690 | -
dc.identifier.issn | 2072-4292 | -
dc.identifier.uri | http://hdl.handle.net/10722/277146 | -
dc.language | eng | -
dc.publisher | MDPI AG. The Journal's web site is located at http://www.mdpi.com/journal/remotesensing/ | -
dc.relation.ispartof | Remote Sensing | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | object-based post-classification refinement (OBPR) | -
dc.subject | convolutional neural network (CNN) | -
dc.subject | synthetic aperture radar (SAR) | -
dc.subject | land use and land cover | -
dc.subject | object-based image analysis (OBIA) | -
dc.title | Integration of Convolutional Neural Networks and Object-Based Post-Classification Refinement for Land Use and Land Cover Mapping with Optical and SAR Data | -
dc.type | Article | -
dc.identifier.email | Yeh, AGO: hdxugoy@hkucc.hku.hk | -
dc.identifier.authority | Yeh, AGO=rp01033 | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.3390/rs11060690 | -
dc.identifier.scopus | eid_2-s2.0-85071579171 | -
dc.identifier.hkuros | 305879 | -
dc.identifier.volume | 11 | -
dc.identifier.issue | 6 | -
dc.identifier.spage | article no. 690 | -
dc.identifier.epage | article no. 690 | -
dc.identifier.isi | WOS:000465615300067 | -
dc.publisher.place | Switzerland | -
dc.identifier.issnl | 2072-4292 | -
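
Note: The OBPR step summarized in the abstract above, in which every pixel of an image object is relabeled with the most frequent land cover category that the CNN assigned within that object, amounts to a per-segment majority vote. Below is a minimal sketch of that idea in Python; the array names (cnn_map, segments) and the use of NumPy are illustrative assumptions, not the authors' implementation.

import numpy as np

def obpr_majority_vote(cnn_map, segments):
    """Relabel each image object with the majority CNN class of its pixels.

    cnn_map  -- 2-D array of per-pixel class labels predicted by the CNN
    segments -- 2-D array of image-object IDs from an OBIA segmentation
    """
    refined = np.empty_like(cnn_map)
    for obj_id in np.unique(segments):
        mask = segments == obj_id
        # Count the CNN classes inside this object and keep the most
        # frequent one; a tie resolves to the smallest label in this sketch.
        classes, counts = np.unique(cnn_map[mask], return_counts=True)
        refined[mask] = classes[np.argmax(counts)]
    return refined

# Toy example with two image objects (hypothetical data): the isolated
# class-2 pixel ("salt") inside object 2 is absorbed by the object's
# majority class, illustrating the salt-and-pepper reduction.
cnn_map = np.array([[0, 0, 1, 1],
                    [0, 1, 1, 1],
                    [0, 0, 2, 1],
                    [0, 0, 1, 1]])
segments = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [1, 1, 2, 2]])
print(obpr_majority_vote(cnn_map, segments))  # object 1 -> 0, object 2 -> 1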
