Article: Spatiotemporal Satellite Image Fusion Using Deep Convolutional Neural Networks

Title: Spatiotemporal Satellite Image Fusion Using Deep Convolutional Neural Networks
Authors: Song, Huihui; Liu, Qingshan; Wang, Guojie; Hang, Renlong; Huang, Bo
Keywords: Convolutional neural network (CNN); nonlinear mapping (NLM); spatial resolution; temporal resolution
Issue Date: 2018
Citation: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, v. 11, n. 3, p. 821-829
Abstract: We propose a novel spatiotemporal fusion method based on deep convolutional neural networks (CNNs), motivated by the availability of massive remote sensing data. In the training stage, we build two five-layer CNNs to deal with the problems of complicated correspondence and large spatial resolution gaps between MODIS and Landsat images. Specifically, we first learn a nonlinear mapping CNN between MODIS and low-spatial-resolution (LSR) Landsat images and then learn a super-resolution CNN between LSR Landsat and original Landsat images. In the prediction stage, instead of directly taking the outputs of the CNNs as the fusion result, we design a fusion model consisting of high-pass modulation and a weighting strategy to make full use of the information in prior images. Specifically, we first map the input MODIS images to transitional images via the learned nonlinear mapping CNN and further improve the transitional images to LSR Landsat images via the fusion model; then, via the learned SR CNN, the LSR Landsat images are super-resolved to transitional images, which are further improved to Landsat images via the fusion model. Compared with previous learning-based fusion methods, mainly the sparse-representation-based methods, our CNN-based spatiotemporal method has the following advantages: 1) automatically extracting effective image features; 2) learning an end-to-end mapping between MODIS and LSR Landsat images; and 3) generating more favorable fusion results. To examine the performance of the proposed fusion method, we conduct experiments on two representative Landsat-MODIS datasets, comparing against the sparse-representation-based spatiotemporal fusion model. The quantitative evaluations on all possible prediction dates, together with the visual and quantitative comparison of fusion results on one key date, demonstrate that the proposed method generates more accurate fusion results.
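The prediction pipeline described in the abstract refines each CNN output with a fusion step before passing it on. As a rough illustration, the following is a minimal sketch of a high-pass-modulation fusion step and a weighting step; the function names and the simplifications are hypothetical, and the paper's actual CNNs and weighting strategy are not reproduced here.

```python
import numpy as np

def highpass_fuse(transitional_pred, prior_fine, prior_transitional):
    """Inject high-frequency detail from a prior fine-resolution image into a
    CNN's transitional prediction: the prior's high-pass residual (fine image
    minus its transitional counterpart) is added back. A hypothetical
    simplification of the paper's fusion model."""
    return transitional_pred + (prior_fine - prior_transitional)

def weighted_fuse(candidates, weights):
    """Combine fusion candidates derived from several prior dates using
    normalized weights (a stand-in for the paper's weighting strategy)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * c for wi, c in zip(w, candidates))

# Toy 2x2 example: a flat prediction gains the prior's spatial detail.
pred = np.zeros((2, 2))
prior_fine = np.array([[1.0, 2.0], [3.0, 4.0]])
prior_trans = np.full((2, 2), 2.5)  # stands in for a blurred/transitional prior
fused = highpass_fuse(pred, prior_fine, prior_trans)
```

In this sketch the same two steps would be applied twice, once after the nonlinear-mapping CNN (MODIS to LSR Landsat scale) and once after the SR CNN (LSR Landsat to full Landsat scale).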
Persistent Identifier: http://hdl.handle.net/10722/329495
ISSN: 1939-1404
2023 Impact Factor: 4.7
2023 SCImago Journal Rankings: 1.434
ISI Accession Number ID: WOS:000427425000012

 

DC Field / Value
dc.contributor.author: Song, Huihui
dc.contributor.author: Liu, Qingshan
dc.contributor.author: Wang, Guojie
dc.contributor.author: Hang, Renlong
dc.contributor.author: Huang, Bo
dc.date.accessioned: 2023-08-09T03:33:12Z
dc.date.available: 2023-08-09T03:33:12Z
dc.date.issued: 2018
dc.identifier.citation: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, v. 11, n. 3, p. 821-829
dc.identifier.issn: 1939-1404
dc.identifier.uri: http://hdl.handle.net/10722/329495
dc.language: eng
dc.relation.ispartof: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
dc.subject: Convolutional neural network (CNN)
dc.subject: nonlinear mapping (NLM)
dc.subject: spatial resolution
dc.subject: temporal resolution
dc.title: Spatiotemporal Satellite Image Fusion Using Deep Convolutional Neural Networks
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/JSTARS.2018.2797894
dc.identifier.scopus: eid_2-s2.0-85042131669
dc.identifier.volume: 11
dc.identifier.issue: 3
dc.identifier.spage: 821
dc.identifier.epage: 829
dc.identifier.eissn: 2151-1535
dc.identifier.isi: WOS:000427425000012
