Article: Deep unsupervised active learning via matrix sketching

Title: Deep unsupervised active learning via matrix sketching
Authors: Li, Changsheng; Li, Rongqing; Yuan, Ye; Wang, Guoren; Xu, Dong
Keywords: Data reconstruction; Matrix sketching; Self-supervised learning; Unsupervised active learning
Issue Date: 2021
Citation: IEEE Transactions on Image Processing, 2021, v. 30, p. 9280-9293
Abstract: Most existing unsupervised active learning methods minimize the data reconstruction loss with linear models in order to choose representative samples for manual labeling in an unsupervised setting; as a result, they often fail to model data with complex non-linear structure. To address this issue, we propose a new deep unsupervised active learning method for classification tasks, inspired by the idea of matrix sketching and called ALMS. Specifically, ALMS leverages a deep auto-encoder to embed data into a latent space and then describes all of the embedded data with a small sketch that summarizes their major characteristics. In contrast to previous approaches that reconstruct the whole data matrix when selecting representative samples, ALMS selects a representative subset of samples that well approximates the sketch, which preserves the major information in the data while significantly reducing the number of network parameters. This allows our algorithm to alleviate model overfitting and to readily cope with large datasets. In effect, the sketch provides a self-supervised signal that guides the learning of the model. Moreover, we construct an auxiliary self-supervised task of classifying real/fake samples to further improve the representation ability of the encoder. We thoroughly evaluate ALMS on both single-label and multi-label classification tasks, and the results demonstrate its superior performance against state-of-the-art methods. The code can be found at https://github.com/lrq99/ALMS.
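The abstract outlines the ALMS pipeline in prose: embed the unlabeled pool with a deep auto-encoder, summarize the embeddings with a small matrix sketch, then pick the samples whose span best approximates that sketch. The toy Python sketch below illustrates that flow under stated assumptions: the auto-encoder sizes, the use of Frequent Directions as the sketching step, and the greedy residual-based selection heuristic are all illustrative choices, not the authors' released implementation (see the GitHub link in the abstract for that).

```python
# Illustrative sketch of the pipeline described in the abstract:
# (1) embed data with a deep auto-encoder, (2) summarize the embeddings
# with a small matrix sketch (Frequent Directions is used here as a
# stand-in sketching step), (3) greedily pick samples whose span best
# approximates the sketch. Hyperparameters and heuristics are assumptions.
import numpy as np
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Small fully-connected auto-encoder; layer sizes are arbitrary."""
    def __init__(self, in_dim=784, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def frequent_directions(Z, ell):
    """Return an ell x d sketch B with B^T B approximating Z^T Z."""
    n, d = Z.shape
    B = np.zeros((ell, d))
    for i in range(n):
        B[-1] = Z[i]                                  # insert next row into the free slot
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        delta = s[-1] ** 2                            # energy of the weakest direction
        s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))  # shrink all directions by it
        B = s[:, None] * Vt                           # last row becomes zero again
    return B

def select_by_sketch(Z, B, budget):
    """Greedy heuristic: pick samples that most reduce the residual of the
    sketch after projecting it onto the span of the chosen embeddings."""
    chosen, R = [], B.copy()
    for _ in range(budget):
        scores = np.abs(R @ Z.T).sum(axis=0)          # alignment with the residual
        scores[chosen] = -np.inf                      # never re-pick a sample
        j = int(np.argmax(scores))
        chosen.append(j)
        S = Z[chosen]                                 # embeddings of chosen samples
        P = S.T @ np.linalg.pinv(S @ S.T) @ S         # projector onto their span
        R = B - B @ P                                 # remaining sketch residual
    return chosen

if __name__ == "__main__":
    X = torch.randn(1000, 784)                        # stand-in unlabeled pool
    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):                                # tiny reconstruction pre-training
        recon, _ = model(X)
        loss = nn.functional.mse_loss(recon, X)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        _, Z = model(X)
    B = frequent_directions(Z.numpy(), ell=32)        # compact summary of all embeddings
    picks = select_by_sketch(Z.numpy(), B, budget=10) # indices to send for labeling
    print("query these samples for labels:", picks)
```

The auxiliary real/fake self-supervised task mentioned in the abstract (a classification head on the encoder) is omitted from this toy version for brevity.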
Persistent Identifier: http://hdl.handle.net/10722/321968
ISSN: 1057-7149
2023 Impact Factor: 10.8
2023 SCImago Journal Rankings: 3.556
ISI Accession Number ID: WOS:000717767800003

 

DC Field                     Value
dc.contributor.author        Li, Changsheng
dc.contributor.author        Li, Rongqing
dc.contributor.author        Yuan, Ye
dc.contributor.author        Wang, Guoren
dc.contributor.author        Xu, Dong
dc.date.accessioned          2022-11-03T02:22:42Z
dc.date.available            2022-11-03T02:22:42Z
dc.date.issued               2021
dc.identifier.citation       IEEE Transactions on Image Processing, 2021, v. 30, p. 9280-9293
dc.identifier.issn           1057-7149
dc.identifier.uri            http://hdl.handle.net/10722/321968
dc.language                  eng
dc.relation.ispartof         IEEE Transactions on Image Processing
dc.subject                   Data reconstruction
dc.subject                   Matrix sketching
dc.subject                   Self-supervised learning
dc.subject                   Unsupervised active learning
dc.title                     Deep unsupervised active learning via matrix sketching
dc.type                      Article
dc.description.nature        link_to_subscribed_fulltext
dc.identifier.doi            10.1109/TIP.2021.3124317
dc.identifier.pmid           34739378
dc.identifier.scopus         eid_2-s2.0-85118666282
dc.identifier.volume         30
dc.identifier.spage          9280
dc.identifier.epage          9293
dc.identifier.eissn          1941-0042
dc.identifier.isi            WOS:000717767800003
