Article: Spatio-spectral fusion of satellite images based on dictionary-pair learning

Title: Spatio-spectral fusion of satellite images based on dictionary-pair learning
Authors: Song, Huihui; Huang, Bo; Zhang, Kaihua; Zhang, Hankui
Keywords: Dictionary-pair learning; High spatial resolution; High spectral resolution; Sparse non-negative matrix factorization; Spatio-spectral fusion
Issue Date: 2014
Citation: Information Fusion, 2014, v. 18, n. 1, p. 148-160
Abstract: This paper proposes a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning. By combining the spectral information from sensors with low spatial resolution but high spectral resolution (LSHS) and the spatial information from sensors with high spatial resolution but low spectral resolution (HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Based on the sparse non-negative matrix factorization technique, this method first extracts the spectral bases of LSHS and HSLS images by making full use of the rich spectral information in LSHS data. The spectral bases of these two categories of data then form a dictionary-pair, owing to their correspondence in representing the pixel spectra of LSHS data and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data and the representation coefficients of the HSLS data, fused data are finally derived that are characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data. Experiments are carried out by comparing the proposed method with two representative methods on both simulated data and actual satellite images, including the fusion of Landsat/ETM+ and Aqua/MODIS data and the fusion of EO-1/Hyperion and SPOT5/HRG multispectral images. By visually comparing the fusion results and quantitatively evaluating them in terms of several measurement indices, it can be concluded that the proposed method is effective in preserving both spectral information and spatial details, and performs better than the comparison approaches. © 2013 Elsevier B.V. All rights reserved.
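The fusion pipeline described in the abstract can be summarized as: learn a spectral basis (dictionary) from the LSHS image via sparse non-negative matrix factorization, derive the paired HSLS dictionary, unmix the HSLS image on that dictionary to obtain high-resolution coefficients, and recombine. Below is a minimal numpy sketch of that idea, not the paper's actual implementation: the multiplicative-update rules, the regularization, and the assumed spectral response matrix `srf` (mapping hyperspectral bands to multispectral bands) are illustrative simplifications.

```python
import numpy as np

def sparse_nmf(V, k, iters=200, lam=0.1, seed=0):
    """Factor V ≈ W @ H with nonnegative W, H and an L1 penalty on H.

    Uses simple multiplicative updates; the sparsity weight `lam` and
    iteration count are illustrative choices, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
        W /= W.sum(axis=0, keepdims=True) + 1e-9  # normalize spectral bases
    return W, H

def fuse(lshs, hsls, srf, k=8, iters=200):
    """Dictionary-pair fusion sketch.

    lshs: (B_h, N_low) hyperspectral pixels (low spatial resolution)
    hsls: (B_m, N_high) multispectral pixels (high spatial resolution)
    srf:  (B_m, B_h) assumed spectral response relating the two sensors
    Returns a (B_h, N_high) fused image: LSHS spectral resolution at
    HSLS spatial resolution.
    """
    W_h, _ = sparse_nmf(lshs, k)   # spectral basis learned from LSHS data
    W_m = srf @ W_h                # paired multispectral dictionary
    # Unmix the HSLS pixels on the paired dictionary (nonnegative
    # least squares via multiplicative H-updates).
    H = np.ones((k, hsls.shape[1]))
    for _ in range(iters):
        H *= (W_m.T @ hsls) / (W_m.T @ W_m @ H + 1e-9)
    # Recombine: LSHS spectral bases + HSLS spatial coefficients.
    return W_h @ H
```

The key design point the abstract emphasizes is that the two dictionaries are a *pair*: every abundance vector that explains a multispectral pixel under `W_m` is reused to synthesize the corresponding hyperspectral pixel under `W_h`.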
Persistent Identifier: http://hdl.handle.net/10722/329300
ISSN: 1566-2535
2021 Impact Factor: 17.564
2020 SCImago Journal Rankings: 2.776
ISI Accession Number ID: WOS:000331347100014

 

DC Field: Value
dc.contributor.author: Song, Huihui
dc.contributor.author: Huang, Bo
dc.contributor.author: Zhang, Kaihua
dc.contributor.author: Zhang, Hankui
dc.date.accessioned: 2023-08-09T03:31:49Z
dc.date.available: 2023-08-09T03:31:49Z
dc.date.issued: 2014
dc.identifier.citation: Information Fusion, 2014, v. 18, n. 1, p. 148-160
dc.identifier.issn: 1566-2535
dc.identifier.uri: http://hdl.handle.net/10722/329300
dc.description.abstract: This paper proposes a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning. By combining the spectral information from sensors with low spatial resolution but high spectral resolution (LSHS) and the spatial information from sensors with high spatial resolution but low spectral resolution (HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Based on the sparse non-negative matrix factorization technique, this method first extracts the spectral bases of LSHS and HSLS images by making full use of the rich spectral information in LSHS data. The spectral bases of these two categories of data then form a dictionary-pair, owing to their correspondence in representing the pixel spectra of LSHS data and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data and the representation coefficients of the HSLS data, fused data are finally derived that are characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data. Experiments are carried out by comparing the proposed method with two representative methods on both simulated data and actual satellite images, including the fusion of Landsat/ETM+ and Aqua/MODIS data and the fusion of EO-1/Hyperion and SPOT5/HRG multispectral images. By visually comparing the fusion results and quantitatively evaluating them in terms of several measurement indices, it can be concluded that the proposed method is effective in preserving both spectral information and spatial details, and performs better than the comparison approaches. © 2013 Elsevier B.V. All rights reserved.
dc.language: eng
dc.relation.ispartof: Information Fusion
dc.subject: Dictionary-pair learning
dc.subject: High spatial resolution
dc.subject: High spectral resolution
dc.subject: Sparse non-negative matrix factorization
dc.subject: Spatio-spectral fusion
dc.title: Spatio-spectral fusion of satellite images based on dictionary-pair learning
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.inffus.2013.08.005
dc.identifier.scopus: eid_2-s2.0-84892364431
dc.identifier.volume: 18
dc.identifier.issue: 1
dc.identifier.spage: 148
dc.identifier.epage: 160
dc.identifier.isi: WOS:000331347100014
