Article: Sequential Gaussian Processes for Online Learning of Nonstationary Functions
Title | Sequential Gaussian Processes for Online Learning of Nonstationary Functions |
---|---|
Authors | Zhang, M M; Dumitrascu, B; Williamson, S A; Engelhardt, B E |
Keywords | Gaussian processes; online learning; sequential Monte Carlo |
Issue Date | 17-Apr-2023 |
Publisher | Institute of Electrical and Electronics Engineers |
Citation | IEEE Transactions on Signal Processing, 2023, v. 71, p. 1539-1550 |
Abstract | Many machine learning problems can be framed in the context of estimating functions, and often these are time-dependent functions that are estimated in real-time as observations arrive. Gaussian processes (GPs) are an attractive choice for modeling real-valued nonlinear functions due to their flexibility and uncertainty quantification. However, the typical GP regression model suffers from several drawbacks: 1) Conventional GP inference scales O(N³) with respect to the number of observations; 2) Updating a GP model sequentially is not trivial; and 3) Covariance kernels typically enforce stationarity constraints on the function, while GPs with non-stationary covariance kernels are often intractable to use in practice. To overcome these issues, we propose a sequential Monte Carlo algorithm to fit infinite mixtures of GPs that capture non-stationary behavior while allowing for online, distributed inference. Our approach empirically improves performance over state-of-the-art methods for online GP estimation in the presence of non-stationarity in time-series data. To demonstrate the utility of our proposed online Gaussian process mixture-of-experts approach in applied settings, we show that we can successfully implement an optimization algorithm using online Gaussian process bandits. |
Persistent Identifier | http://hdl.handle.net/10722/332008 |
ISSN | 1053-587X (2023 Impact Factor: 4.6; 2023 SCImago Journal Rankings: 2.520) |
ISI Accession Number ID | WOS:000982399100008 |
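The abstract lists the drawbacks of conventional GP regression that motivate the paper. Purely as a point of reference (this is not the paper's sequential Monte Carlo mixture-of-experts method), the Python sketch below shows standard exact GP regression with a stationary RBF kernel: the N x N Cholesky factorization is the O(N³) step, and it must be recomputed whenever a new observation arrives, which is why naive sequential updating is not trivial. Function names and the toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (stationary) covariance: the kind of kernel the
    # abstract notes enforces stationarity on the estimated function.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    # Exact GP regression. The Cholesky factorization of the N x N kernel
    # matrix costs O(N^3) -- the scaling bottleneck cited in the abstract --
    # and has to be redone from scratch for each new observation, which is
    # why online updating of a plain GP is not straightforward.
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)                                  # O(N^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^{-1} y
    K_star = rbf_kernel(x_test, x_train)
    mean = K_star @ alpha
    v = np.linalg.solve(L, K_star.T)
    cov = rbf_kernel(x_test, x_test) - v.T @ v
    return mean, cov

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)
mu, cov = gp_posterior(x, y, np.linspace(0, 5, 100))
```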
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, M M | - |
dc.contributor.author | Dumitrascu, B | - |
dc.contributor.author | Williamson, S A | - |
dc.contributor.author | Engelhardt, B E | - |
dc.date.accessioned | 2023-09-28T05:00:13Z | - |
dc.date.available | 2023-09-28T05:00:13Z | - |
dc.date.issued | 2023-04-17 | - |
dc.identifier.citation | IEEE Transactions on Signal Processing, 2023, v. 71, p. 1539-1550 | - |
dc.identifier.issn | 1053-587X | - |
dc.identifier.uri | http://hdl.handle.net/10722/332008 | - |
dc.description.abstract | Many machine learning problems can be framed in the context of estimating functions, and often these are time-dependent functions that are estimated in real-time as observations arrive. Gaussian processes (GPs) are an attractive choice for modeling real-valued nonlinear functions due to their flexibility and uncertainty quantification. However, the typical GP regression model suffers from several drawbacks: 1) Conventional GP inference scales O(N³) with respect to the number of observations; 2) Updating a GP model sequentially is not trivial; and 3) Covariance kernels typically enforce stationarity constraints on the function, while GPs with non-stationary covariance kernels are often intractable to use in practice. To overcome these issues, we propose a sequential Monte Carlo algorithm to fit infinite mixtures of GPs that capture non-stationary behavior while allowing for online, distributed inference. Our approach empirically improves performance over state-of-the-art methods for online GP estimation in the presence of non-stationarity in time-series data. To demonstrate the utility of our proposed online Gaussian process mixture-of-experts approach in applied settings, we show that we can successfully implement an optimization algorithm using online Gaussian process bandits. | -
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.ispartof | IEEE Transactions on Signal Processing | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Gaussian processes | - |
dc.subject | online learning | - |
dc.subject | sequential Monte Carlo | - |
dc.title | Sequential Gaussian Processes for Online Learning of Nonstationary Functions | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/TSP.2023.3267992 | - |
dc.identifier.scopus | eid_2-s2.0-85153793906 | - |
dc.identifier.volume | 71 | - |
dc.identifier.spage | 1539 | - |
dc.identifier.epage | 1550 | - |
dc.identifier.eissn | 1941-0476 | - |
dc.identifier.isi | WOS:000982399100008 | - |
dc.identifier.issnl | 1053-587X | - |