Links to fulltext (may require subscription):
- Publisher website (DOI): 10.1109/TNNLS.2014.2334137
- Scopus: eid_2-s2.0-85027945338
Citations:
- Scopus: 0
Article: Generalized multiple kernel learning with data-dependent priors
Title | Generalized multiple kernel learning with data-dependent priors |
---|---|
Authors | Mao, Qi; Tsang, Ivor W.; Gao, Shenghua; Wang, Li |
Keywords | Data fusion; Dirty data; Missing views; Multiple kernel learning; Partial correspondence; Semisupervised learning |
Issue Date | 2015 |
Citation | IEEE Transactions on Neural Networks and Learning Systems, 2015, v. 26, n. 6, p. 1134-1148 |
Abstract | Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views. |
Persistent Identifier | http://hdl.handle.net/10722/345089 |
ISSN | 2162-237X (print); 2162-2388 (online) |
Journal Metrics | 2023 Impact Factor: 10.2; 2023 SCImago Journal Rankings: 4.170 |
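The abstract describes multiple kernel learning as learning a predictor over a weighted combination of kernels, one per feature set or view. The sketch below illustrates only that general idea on synthetic data, not the paper's maximum-entropy-discrimination formulation or its data-dependent priors: kernel weights on the simplex are chosen by a common kernel-alignment heuristic, and the combined kernel feeds a kernel ridge predictor. All names and data here are illustrative assumptions.

```python
# Minimal MKL sketch (NOT the paper's method): combine per-view Gram
# matrices K = sum_m mu_m * K_m with simplex weights mu, then fit a
# kernel ridge predictor on the combined kernel. Synthetic toy data.
import numpy as np

rng = np.random.default_rng(0)

# Two "views" of 20 points; the labels depend on the first view only.
X1 = rng.normal(size=(20, 2))
X2 = rng.normal(size=(20, 2))              # uninformative second view
y = np.sign(X1[:, 0])                      # labels in {-1, +1}

def rbf_gram(X, gamma=1.0):
    """RBF Gram matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

grams = [rbf_gram(X1), rbf_gram(X2)]

# Weight each kernel by its alignment <K, y y^T> / ||K||_F, clipped to be
# nonnegative and renormalised onto the simplex (a standard heuristic,
# standing in for the learned weights of a full MKL solver).
yy = np.outer(y, y)
align = np.array([max((K * yy).sum() / np.linalg.norm(K), 0.0) for K in grams])
mu = align / align.sum()

K = sum(m * G for m, G in zip(mu, grams))  # combined kernel

# Kernel ridge predictor: alpha = (K + lam*I)^{-1} y, f(x_i) = (K alpha)_i.
lam = 0.1
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
pred = np.sign(K @ alpha)

print("kernel weights:", np.round(mu, 3))
print("training accuracy:", (pred == y).mean())
```

A full MKL solver would optimize the weights `mu` jointly with the predictor (and, in the paper's framework, place a data-dependent prior over them); the alignment heuristic here just makes the combination step concrete in a few lines.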
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mao, Qi | - |
dc.contributor.author | Tsang, Ivor W. | - |
dc.contributor.author | Gao, Shenghua | - |
dc.contributor.author | Wang, Li | - |
dc.date.accessioned | 2024-08-15T09:25:09Z | - |
dc.date.available | 2024-08-15T09:25:09Z | - |
dc.date.issued | 2015 | - |
dc.identifier.citation | IEEE Transactions on Neural Networks and Learning Systems, 2015, v. 26, n. 6, p. 1134-1148 | - |
dc.identifier.issn | 2162-237X | - |
dc.identifier.uri | http://hdl.handle.net/10722/345089 | - |
dc.description.abstract | Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Neural Networks and Learning Systems | - |
dc.subject | Data fusion | - |
dc.subject | Dirty data | - |
dc.subject | Missing views | - |
dc.subject | Multiple kernel learning | - |
dc.subject | Partial correspondence | - |
dc.subject | Semisupervised learning | - |
dc.title | Generalized multiple kernel learning with data-dependent priors | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TNNLS.2014.2334137 | - |
dc.identifier.scopus | eid_2-s2.0-85027945338 | - |
dc.identifier.volume | 26 | - |
dc.identifier.issue | 6 | - |
dc.identifier.spage | 1134 | - |
dc.identifier.epage | 1148 | - |
dc.identifier.eissn | 2162-2388 | - |