Conference Paper: GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks
| Title | GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks |
|---|---|
| Authors | Li, Zhonghang; Xia, Lianghao; Xu, Yong; Huang, Chao |
| Issue Date | 2023 |
| Citation | Advances in Neural Information Processing Systems, 2023, v. 36 |
| Abstract | In recent years, there has been a rapid development of spatio-temporal prediction techniques in response to the increasing demands of traffic management and travel planning. While advanced end-to-end models have achieved notable success in improving predictive performance, their integration and expansion pose significant challenges. This work aims to address these challenges by introducing a spatio-temporal pre-training framework that seamlessly integrates with downstream baselines and enhances their performance. The framework is built upon two key designs: (i) We propose a spatio-temporal mask autoencoder as a pre-training model for learning spatio-temporal dependencies. The model incorporates customized parameter learners and hierarchical spatial pattern encoding networks. These modules are specifically designed to capture spatio-temporal customized representations and intra- and inter-cluster region semantic relationships, which have often been neglected in existing approaches. (ii) We introduce an adaptive mask strategy as part of the pre-training mechanism. This strategy guides the mask autoencoder in learning robust spatio-temporal representations and facilitates the modeling of different relationships, ranging from intra-cluster to inter-cluster, in an easy-to-hard training manner. Extensive experiments conducted on representative benchmarks demonstrate the effectiveness of our proposed method. We have made our model implementation publicly available at https://github.com/HKUDS/GPT-ST. |
| Persistent Identifier | http://hdl.handle.net/10722/355963 |
| ISSN | 1049-5258 |
| SCImago Journal Rankings | 1.399 (2020) |
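
The abstract outlines the pre-training recipe: mask portions of the spatio-temporal signal, train an autoencoder to reconstruct them, and schedule the masking from easy to hard. As a rough illustration only, the sketch below mimics that loop in PyTorch. Every class, function, and tensor shape here is a hypothetical simplification: the paper's adaptive mask is cluster-aware and learned, and its encoder uses customized parameter learners and hierarchical spatial pattern encoders, none of which are reproduced here. See the authors' repository (https://github.com/HKUDS/GPT-ST) for the actual implementation.

```python
# Minimal sketch of masked-autoencoder pre-training over spatio-temporal data.
# All names and shapes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class STMaskAutoencoder(nn.Module):
    """Toy encoder-decoder over a (batch, time, regions, features) tensor."""

    def __init__(self, num_features: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_features, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.decoder = nn.Linear(hidden_dim, num_features)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Zero out masked (time, region) cells, then reconstruct all of them.
        x_masked = x * (~mask).unsqueeze(-1)
        return self.decoder(self.encoder(x_masked))


def adaptive_mask(x: torch.Tensor, epoch: int, max_epoch: int,
                  lo: float = 0.25, hi: float = 0.75) -> torch.Tensor:
    # Stand-in for the paper's adaptive strategy: here, "easy-to-hard" is
    # approximated by a mask ratio that grows linearly over training.
    ratio = lo + (hi - lo) * epoch / max(max_epoch - 1, 1)
    return torch.rand(x.shape[:-1]) < ratio  # True = masked cell


def pretrain(model: STMaskAutoencoder, data_loader, max_epoch: int = 50,
             lr: float = 1e-3) -> STMaskAutoencoder:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for epoch in range(max_epoch):
        for x in data_loader:  # x: (batch, time, regions, features)
            mask = adaptive_mask(x, epoch, max_epoch)
            recon = model(x, mask)
            # Reconstruction loss is computed on the masked entries only.
            loss = loss_fn(recon[mask], x[mask])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

After pre-training, the learned encoder would supply representations to a downstream spatio-temporal baseline, which is the integration path the abstract describes; how that hand-off works in practice is defined by the authors' released code, not by this sketch.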
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Li, Zhonghang | - |
| dc.contributor.author | Xia, Lianghao | - |
| dc.contributor.author | Xu, Yong | - |
| dc.contributor.author | Huang, Chao | - |
| dc.date.accessioned | 2025-05-19T05:46:56Z | - |
| dc.date.available | 2025-05-19T05:46:56Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | Advances in Neural Information Processing Systems, 2023, v. 36 | - |
| dc.identifier.issn | 1049-5258 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/355963 | - |
| dc.description.abstract | In recent years, there has been a rapid development of spatio-temporal prediction techniques in response to the increasing demands of traffic management and travel planning. While advanced end-to-end models have achieved notable success in improving predictive performance, their integration and expansion pose significant challenges. This work aims to address these challenges by introducing a spatio-temporal pre-training framework that seamlessly integrates with downstream baselines and enhances their performance. The framework is built upon two key designs: (i) We propose a spatio-temporal mask autoencoder as a pre-training model for learning spatio-temporal dependencies. The model incorporates customized parameter learners and hierarchical spatial pattern encoding networks. These modules are specifically designed to capture spatio-temporal customized representations and intra- and inter-cluster region semantic relationships, which have often been neglected in existing approaches. (ii) We introduce an adaptive mask strategy as part of the pre-training mechanism. This strategy guides the mask autoencoder in learning robust spatio-temporal representations and facilitates the modeling of different relationships, ranging from intra-cluster to inter-cluster, in an easy-to-hard training manner. Extensive experiments conducted on representative benchmarks demonstrate the effectiveness of our proposed method. We have made our model implementation publicly available at https://github.com/HKUDS/GPT-ST. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Advances in Neural Information Processing Systems | - |
| dc.title | GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.scopus | eid_2-s2.0-85188684279 | - |
| dc.identifier.volume | 36 | - |

