Article: Progressive Self-Supervised Pretraining for Hyperspectral Image Classification

Title: Progressive Self-Supervised Pretraining for Hyperspectral Image Classification
Authors: Guan, Peiyan; Lam, Edmund Y.
Keywords: Hyperspectral image (HSI) classification; limited labels; self-supervised learning (SSL); transfer learning
Issue Date: 6-May-2024
Publisher: IEEE
Citation: IEEE Transactions on Geoscience and Remote Sensing, 2024, v. 62, p. 1-13
Abstract: Self-supervised learning has demonstrated considerable success in hyperspectral image (HSI) classification when limited labeled data are available. However, inherent dissimilarities among HSIs require self-supervised pretraining from scratch for each HSI dataset. Pretraining on a large amount of unlabeled data can be time-consuming. In addition, the poor quality of some HSIs can limit the performance of self-supervised learning (SSL) algorithms. To address these issues, we propose to enhance self-supervised pretraining on HSIs with transfer learning. We introduce a progressive self-supervised pretraining (PSP) framework that acquires a strong initialization for the final pretraining on the target HSI dataset by sequentially performing self-supervised pretraining on datasets that are increasingly similar to the target HSI: first on a large general vision dataset and then on a related HSI dataset. This sequential strategy enables the model to progressively learn from domain-general vision knowledge to target-specific hyperspectral knowledge. To mitigate catastrophic forgetting in sequential training, we develop a regularization method, called self-supervised elastic weight consolidation (SS-EWC), to impose adaptive constraints on the changes to model parameters. Thorough classification experiments on various HSI datasets demonstrate that our framework significantly and consistently improves self-supervised pretraining on HSIs in terms of both convergence speed and representation quality. Furthermore, our framework exhibits high generalizability and can be applied to various SSL algorithms. Transfer learning continues to prove its usefulness in self-supervised settings.
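
The full text is not deposited here, but the abstract is concrete enough to sketch the core mechanism. Below is a minimal, hypothetical Python/PyTorch sketch of how an EWC-style penalty can be carried across sequential self-supervised pretraining stages, in the spirit of the SS-EWC regularizer described above. The function names, the diagonal Fisher estimate taken from the SSL loss, and the single weighting factor lam are assumptions of this sketch, not details taken from the paper.

    # Hypothetical sketch of an EWC-style penalty for sequential SSL pretraining.
    # Not the authors' code: how SS-EWC estimates importance weights is assumed
    # here to follow standard EWC, with the Fisher information approximated by
    # squared gradients of the self-supervised loss.
    import torch

    def estimate_fisher(model, ssl_loss_fn, loader, n_batches=32):
        """Diagonal Fisher estimate: mean squared gradients of the SSL loss."""
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        count = 0
        for batch in loader:
            if count >= n_batches:
                break
            model.zero_grad()
            ssl_loss_fn(model, batch).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
            count += 1
        return {n: f / max(count, 1) for n, f in fisher.items()}

    def ewc_penalty(model, anchor_params, fisher):
        """Quadratic penalty pulling parameters toward the previous stage's solution,
        weighted per parameter by its estimated importance."""
        loss = 0.0
        for n, p in model.named_parameters():
            loss = loss + (fisher[n] * (p - anchor_params[n]) ** 2).sum()
        return loss

    def pretrain_stage(model, ssl_loss_fn, loader, optimizer,
                       anchor=None, fisher=None, lam=1.0, epochs=1):
        """One stage of the progressive schedule; the penalty is active from stage 2 on."""
        for _ in range(epochs):
            for batch in loader:
                loss = ssl_loss_fn(model, batch)
                if anchor is not None:
                    loss = loss + lam * ewc_penalty(model, anchor, fisher)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        # Snapshot this stage's solution and importance weights for the next stage.
        anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
        fisher = estimate_fisher(model, ssl_loss_fn, loader)
        return anchor, fisher

A progressive schedule in the sense of the abstract would call pretrain_stage three times, on a general vision dataset, a related HSI dataset, and the target HSI dataset in turn, carrying anchor and fisher forward so that each stage is adaptively constrained toward the previous one.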
Persistent Identifier: http://hdl.handle.net/10722/350906
ISSN: 0196-2892
2023 Impact Factor: 7.5
2023 SCImago Journal Rankings: 2.403
DC Field: Value
dc.contributor.author: Guan, Peiyan
dc.contributor.author: Lam, Edmund Y.
dc.date.accessioned: 2024-11-06T00:30:34Z
dc.date.available: 2024-11-06T00:30:34Z
dc.date.issued: 2024-05-06
dc.identifier.citation: IEEE Transactions on Geoscience and Remote Sensing, 2024, v. 62, p. 1-13
dc.identifier.issn: 0196-2892
dc.identifier.uri: http://hdl.handle.net/10722/350906
dc.description.abstract: Self-supervised learning has demonstrated considerable success in hyperspectral image (HSI) classification when limited labeled data are available. However, inherent dissimilarities among HSIs require self-supervised pretraining from scratch for each HSI dataset. Pretraining on a large amount of unlabeled data can be time-consuming. In addition, the poor quality of some HSIs can limit the performance of self-supervised learning (SSL) algorithms. To address these issues, we propose to enhance self-supervised pretraining on HSIs with transfer learning. We introduce a progressive self-supervised pretraining (PSP) framework that acquires a strong initialization for the final pretraining on the target HSI dataset by sequentially performing self-supervised pretraining on datasets that are increasingly similar to the target HSI: first on a large general vision dataset and then on a related HSI dataset. This sequential strategy enables the model to progressively learn from domain-general vision knowledge to target-specific hyperspectral knowledge. To mitigate catastrophic forgetting in sequential training, we develop a regularization method, called self-supervised elastic weight consolidation (SS-EWC), to impose adaptive constraints on the changes to model parameters. Thorough classification experiments on various HSI datasets demonstrate that our framework significantly and consistently improves self-supervised pretraining on HSIs in terms of both convergence speed and representation quality. Furthermore, our framework exhibits high generalizability and can be applied to various SSL algorithms. Transfer learning continues to prove its usefulness in self-supervised settings.
dc.language: eng
dc.publisher: IEEE
dc.relation.ispartof: IEEE Transactions on Geoscience and Remote Sensing
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Hyperspectral image (HSI) classification
dc.subject: limited labels
dc.subject: self-supervised learning (SSL)
dc.subject: transfer learning
dc.title: Progressive Self-Supervised Pretraining for Hyperspectral Image Classification
dc.type: Article
dc.identifier.doi: 10.1109/TGRS.2024.3397740
dc.identifier.scopus: eid_2-s2.0-85192736577
dc.identifier.volume: 62
dc.identifier.spage: 1
dc.identifier.epage: 13
dc.identifier.eissn: 1558-0644
dc.identifier.issnl: 0196-2892
