Article: Thick Cloud Removal Under Land Cover Changes Using Multisource Satellite Imagery and a Spatiotemporal Attention Network

Title: Thick Cloud Removal Under Land Cover Changes Using Multisource Satellite Imagery and a Spatiotemporal Attention Network
Authors: Liu, Hao; Huang, Bo; Cai, Jiajun
Keywords: Cloud removal; deep learning; land cover change; Sentinel imagery; spatiotemporal attention network (STAN)
Issue Date: 2023
Citation: IEEE Transactions on Geoscience and Remote Sensing, 2023, v. 61, article no. 5601218
Abstract: Remote sensing satellites provide observations of the Earth's surface, which are crucial data for applications and analyses in several fields, including agriculture, environmental protection, and sustainable development. However, the wide and frequent occurrence of clouds highly undermines the quality and availability of usable optical data, particularly low-temporal-resolution data. Although deep learning techniques have facilitated recent progress in cloud removal algorithms, thick cloud removal under changing land cover remains challenging. In this study, we propose a framework to remove thick clouds, thin clouds, and cloud shadow from Sentinel-2 images. The framework integrates the spatial detail in a Sentinel-2 reference image and the coarse spectral pattern in a near-target-date Sentinel-3 image as spatiotemporal guidance to generate missing data with land cover change information in a cloudy Sentinel-2 image. The reconstruction is performed using a spatiotemporal attention network (STAN) that adopts the self-attention mechanism, residual learning, and high-pass features to enhance feature extraction from the multisource data. The experimental results show that STAN outperforms residual u-net (ResUnet), cloud-removal network (CRN), convolutional neural network-based spatial-temporal-spectral (STS-CNN), and DSen2-CR in terms of multiple quantitative metrics and visual characteristics. The comparative experiment proves that the integration of Sentinel-3 data improves the cloud removal performance, especially in areas with distinctive and heterogeneous land cover changes under large-scale cloud cover. The experimental results also indicate high generalizability of STAN when the Sentinel-3 image is far from the target date, when transferring features to cloud removal for new images, and even with limited training data that simulates severe cloud cover.
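The abstract above is the only architectural description in this record; the paper's full STAN design (including its high-pass feature branch) is not reproduced here. The following is a minimal, hypothetical PyTorch-style sketch, under assumed band counts and patch sizes, of the general idea the abstract describes: fusing a cloudy Sentinel-2 patch, a cloud-free Sentinel-2 reference, and a coarse Sentinel-3 patch with self-attention and residual learning. All names, shapes, and layer choices are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: not the published STAN architecture.
# Assumes 4 bands per source and 32x32 patches for demonstration.
import torch
import torch.nn as nn


class AttentionFusionBlock(nn.Module):
    """Hypothetical fusion block: embed the concatenated multisource inputs,
    apply self-attention over spatial positions, add a residual connection,
    and predict the cloud-free image as a residual on the cloudy input."""

    def __init__(self, channels: int = 64, heads: int = 4, bands: int = 4):
        super().__init__()
        # Three sources (cloudy S2, reference S2, upsampled S3), `bands` bands each
        self.embed = nn.Conv2d(3 * bands, channels, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.refine = nn.Conv2d(channels, bands, kernel_size=3, padding=1)

    def forward(self, s2_cloudy, s2_ref, s3_coarse):
        x = self.embed(torch.cat([s2_cloudy, s2_ref, s3_coarse], dim=1))
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)           # (B, H*W, C) spatial tokens
        attn_out, _ = self.attn(seq, seq, seq)       # self-attention over positions
        x = x + attn_out.transpose(1, 2).reshape(b, c, h, w)  # residual learning
        return s2_cloudy + self.refine(x)            # reconstruct missing pixels


if __name__ == "__main__":
    s2_cloudy = torch.randn(1, 4, 32, 32)   # cloudy Sentinel-2 patch (assumed 4 bands)
    s2_ref    = torch.randn(1, 4, 32, 32)   # cloud-free Sentinel-2 reference patch
    s3_coarse = torch.randn(1, 4, 32, 32)   # Sentinel-3 patch resampled to the S2 grid
    out = AttentionFusionBlock()(s2_cloudy, s2_ref, s3_coarse)
    print(out.shape)                         # torch.Size([1, 4, 32, 32])
```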
Persistent Identifier: http://hdl.handle.net/10722/329921
ISSN: 0196-2892
2021 Impact Factor: 8.125
2020 SCImago Journal Rankings: 2.141
ISI Accession Number ID: WOS:000992306700012

 

DC Field: Value
dc.contributor.author: Liu, Hao
dc.contributor.author: Huang, Bo
dc.contributor.author: Cai, Jiajun
dc.date.accessioned: 2023-08-09T03:36:27Z
dc.date.available: 2023-08-09T03:36:27Z
dc.date.issued: 2023
dc.identifier.citation: IEEE Transactions on Geoscience and Remote Sensing, 2023, v. 61, article no. 5601218
dc.identifier.issn: 0196-2892
dc.identifier.uri: http://hdl.handle.net/10722/329921
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Geoscience and Remote Sensing
dc.subject: Cloud removal
dc.subject: deep learning
dc.subject: land cover change
dc.subject: Sentinel imagery
dc.subject: spatiotemporal attention network (STAN)
dc.title: Thick Cloud Removal Under Land Cover Changes Using Multisource Satellite Imagery and a Spatiotemporal Attention Network
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TGRS.2023.3236106
dc.identifier.scopus: eid_2-s2.0-85147232301
dc.identifier.volume: 61
dc.identifier.spage: article no. 5601218
dc.identifier.epage: article no. 5601218
dc.identifier.eissn: 1558-0644
dc.identifier.isi: WOS:000992306700012
