File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website (DOI): 10.1109/TCSVT.2018.2869875
- Scopus: eid_2-s2.0-85053288052
- WOS: WOS:000489738900021
Article: Accelerating Flexible Manifold Embedding for Scalable Semi-Supervised Learning
Title | Accelerating Flexible Manifold Embedding for Scalable Semi-Supervised Learning |
---|---|
Authors | Qiu, Suo; Nie, Feiping; Xu, Xiangmin; Qing, Chunmei; Xu, Dong |
Keywords | large-scale machine learning; manifold embedding; semi-supervised learning |
Issue Date | 2019 |
Citation | IEEE Transactions on Circuits and Systems for Video Technology, 2019, v. 29, n. 9, p. 2786-2795 |
Abstract | In this paper, we address the problem of large-scale graph-based semi-supervised learning for multi-class classification. Most existing scalable graph-based semi-supervised learning methods are based on the hard linear constraint or cannot cope with the unseen samples, which limits their applications and learning performance. To this end, we build upon our previous work flexible manifold embedding (FME) [1] and propose two novel linear-complexity algorithms called fast flexible manifold embedding (f-FME) and reduced flexible manifold embedding (r-FME). Both of the proposed methods accelerate FME and inherit its advantages. Specifically, our methods address the hard linear constraint problem by combining a regression residue term and a manifold smoothness term jointly, which naturally provides the prediction model for handling unseen samples. To reduce computational costs, we exploit the underlying relationship between a small number of anchor points and all data points to construct the graph adjacency matrix, which leads to simplified closed-form solutions. The resultant f-FME and r-FME algorithms not only scale linearly in both time and space with respect to the number of training samples but also can effectively utilize information from both labeled and unlabeled data. Experimental results show the effectiveness and scalability of the proposed methods. |
Persistent Identifier | http://hdl.handle.net/10722/321805 |
ISSN | 1051-8215 (2023 Impact Factor: 8.3; 2023 SCImago Journal Rankings: 2.299) |
ISI Accession Number ID | WOS:000489738900021 |
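The abstract describes exploiting a small set of anchor points to build the graph adjacency matrix, so that cost scales linearly in the number of samples. The sketch below illustrates that general anchor-graph idea only; the function name, the choice of Gaussian weights, and the parameters `k` and `sigma` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def anchor_affinity(X, anchors, k=3, sigma=1.0):
    """Build an n x m matrix Z linking each point to its k nearest anchors.

    Storing only Z (n x m, m << n) instead of a dense n x n adjacency
    W = Z @ diag(1 / Z.sum(0)) @ Z.T is what keeps memory and time
    linear in n, as the abstract describes.
    """
    # squared distances between every point and every anchor: shape (n, m)
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)
    n = X.shape[0]
    Z = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, :k]            # k closest anchors per point
    rows = np.arange(n)[:, None]
    w = np.exp(-d2[rows, idx] / (2 * sigma ** 2))  # Gaussian affinities
    Z[rows, idx] = w / w.sum(axis=1, keepdims=True)  # each row sums to 1
    return Z
```

Downstream, a method in this family would plug Z into its closed-form solution rather than ever materializing the full n x n adjacency.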
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Qiu, Suo | - |
dc.contributor.author | Nie, Feiping | - |
dc.contributor.author | Xu, Xiangmin | - |
dc.contributor.author | Qing, Chunmei | - |
dc.contributor.author | Xu, Dong | - |
dc.date.accessioned | 2022-11-03T02:21:33Z | - |
dc.date.available | 2022-11-03T02:21:33Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | IEEE Transactions on Circuits and Systems for Video Technology, 2019, v. 29, n. 9, p. 2786-2795 | - |
dc.identifier.issn | 1051-8215 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321805 | - |
dc.description.abstract | In this paper, we address the problem of large-scale graph-based semi-supervised learning for multi-class classification. Most existing scalable graph-based semi-supervised learning methods are based on the hard linear constraint or cannot cope with the unseen samples, which limits their applications and learning performance. To this end, we build upon our previous work flexible manifold embedding (FME) [1] and propose two novel linear-complexity algorithms called fast flexible manifold embedding (f-FME) and reduced flexible manifold embedding (r-FME). Both of the proposed methods accelerate FME and inherit its advantages. Specifically, our methods address the hard linear constraint problem by combining a regression residue term and a manifold smoothness term jointly, which naturally provides the prediction model for handling unseen samples. To reduce computational costs, we exploit the underlying relationship between a small number of anchor points and all data points to construct the graph adjacency matrix, which leads to simplified closed-form solutions. The resultant f-FME and r-FME algorithms not only scale linearly in both time and space with respect to the number of training samples but also can effectively utilize information from both labeled and unlabeled data. Experimental results show the effectiveness and scalability of the proposed methods. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Circuits and Systems for Video Technology | - |
dc.subject | large-scale machine learning | - |
dc.subject | manifold embedding | - |
dc.subject | Semi-supervised learning | - |
dc.title | Accelerating Flexible Manifold Embedding for Scalable Semi-Supervised Learning | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TCSVT.2018.2869875 | - |
dc.identifier.scopus | eid_2-s2.0-85053288052 | - |
dc.identifier.volume | 29 | - |
dc.identifier.issue | 9 | - |
dc.identifier.spage | 2786 | - |
dc.identifier.epage | 2795 | - |
dc.identifier.eissn | 1558-2205 | - |
dc.identifier.isi | WOS:000489738900021 | - |