Links for fulltext (May Require Subscription)
- Publisher Website: 10.1145/3308558.3313730
- Scopus: eid_2-s2.0-85066901523
- WOS: WOS:000483508400068
Conference Paper: MIST: A multiview and multimodal spatial-temporal learning framework for citywide abnormal event forecasting
Title | MIST: A multiview and multimodal spatial-temporal learning framework for citywide abnormal event forecasting |
---|---|
Authors | Huang, Chao; Wu, Xian; Zhang, Chuxu; Yin, Dawei; Zhao, Jiashu; Chawla, Nitesh V. |
Keywords | Abnormal Event Forecasting; Deep Neural Networks; Spatial-temporal Data Mining |
Issue Date | 2019 |
Citation | The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019, 2019, p. 717-728 |
Abstract | Citywide abnormal events, such as crimes and accidents, may result in loss of lives or property if not handled efficiently. Automatically predicting abnormal events before they occur is important for a wide spectrum of applications, ranging from public order maintenance and disaster control to modeling people's activities. However, forecasting different categories of citywide abnormal events is very challenging, as it is affected by many complex factors from different views: (i) dynamic intra-region temporal correlations; (ii) complex inter-region spatial correlations; (iii) latent cross-categorical correlations. In this paper, we develop a Multi-View and Multi-Modal Spatial-Temporal learning (MiST) framework to address the above challenges by promoting the collaboration of different views (spatial, temporal and semantic) and mapping the multi-modal units into the same latent space. Specifically, MiST can preserve the underlying structural information of multi-view abnormal event data and automatically learn the importance of view-specific representations, with the integration of a multi-modal pattern fusion module and a hierarchical recurrent framework. Extensive experiments on three real-world datasets, i.e., crime data and urban anomaly data, demonstrate the superior performance of our MiST method over the state-of-the-art baselines across various settings. |
Persistent Identifier | http://hdl.handle.net/10722/308787 |
ISI Accession Number ID | WOS:000483508400068 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Huang, Chao | - |
dc.contributor.author | Wu, Xian | - |
dc.contributor.author | Zhang, Chuxu | - |
dc.contributor.author | Yin, Dawei | - |
dc.contributor.author | Zhao, Jiashu | - |
dc.contributor.author | Chawla, Nitesh V. | - |
dc.date.accessioned | 2021-12-08T07:50:08Z | - |
dc.date.available | 2021-12-08T07:50:08Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019, 2019, p. 717-728 | - |
dc.identifier.uri | http://hdl.handle.net/10722/308787 | - |
dc.description.abstract | Citywide abnormal events, such as crimes and accidents, may result in loss of lives or property if not handled efficiently. Automatically predicting abnormal events before they occur is important for a wide spectrum of applications, ranging from public order maintenance and disaster control to modeling people's activities. However, forecasting different categories of citywide abnormal events is very challenging, as it is affected by many complex factors from different views: (i) dynamic intra-region temporal correlations; (ii) complex inter-region spatial correlations; (iii) latent cross-categorical correlations. In this paper, we develop a Multi-View and Multi-Modal Spatial-Temporal learning (MiST) framework to address the above challenges by promoting the collaboration of different views (spatial, temporal and semantic) and mapping the multi-modal units into the same latent space. Specifically, MiST can preserve the underlying structural information of multi-view abnormal event data and automatically learn the importance of view-specific representations, with the integration of a multi-modal pattern fusion module and a hierarchical recurrent framework. Extensive experiments on three real-world datasets, i.e., crime data and urban anomaly data, demonstrate the superior performance of our MiST method over the state-of-the-art baselines across various settings. | -
dc.language | eng | - |
dc.relation.ispartof | The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019 | - |
dc.subject | Abnormal Event Forecasting | - |
dc.subject | Deep Neural Networks | - |
dc.subject | Spatial-temporal Data Mining | - |
dc.title | MIST: A multiview and multimodal spatial-temporal learning framework for citywide abnormal event forecasting | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3308558.3313730 | - |
dc.identifier.scopus | eid_2-s2.0-85066901523 | - |
dc.identifier.spage | 717 | - |
dc.identifier.epage | 728 | - |
dc.identifier.isi | WOS:000483508400068 | - |
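The abstract describes view-specific representation learning combined with a fusion module that learns the importance of each view. The sketch below is a minimal, hypothetical illustration of that general idea in Python (PyTorch); it is not the authors' MiST implementation, and all class names, dimensions, and the choice of GRU encoders with attention-based fusion are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): per-view GRU encoders whose
# outputs are fused by a learned attention over views, echoing the abstract's
# "view-specific representations" plus a fusion module.
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    """Weights view-specific representations by learned importance scores."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, view_reprs: torch.Tensor) -> torch.Tensor:
        # view_reprs: (batch, num_views, hidden_dim)
        weights = torch.softmax(self.score(view_reprs), dim=1)  # (batch, num_views, 1)
        return (weights * view_reprs).sum(dim=1)                # (batch, hidden_dim)

class MultiViewForecaster(nn.Module):
    """Encodes each view's sequence with its own GRU, fuses the view
    representations, then predicts per-category abnormal-event scores."""
    def __init__(self, view_dims, hidden_dim: int, num_categories: int):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.GRU(d, hidden_dim, batch_first=True) for d in view_dims
        )
        self.fusion = ViewAttentionFusion(hidden_dim)
        self.head = nn.Linear(hidden_dim, num_categories)

    def forward(self, views):
        # views: list of tensors, one per view, each (batch, seq_len, view_dim)
        reprs = []
        for enc, x in zip(self.encoders, views):
            _, h_n = enc(x)               # h_n: (1, batch, hidden_dim)
            reprs.append(h_n.squeeze(0))
        fused = self.fusion(torch.stack(reprs, dim=1))
        return self.head(fused)           # (batch, num_categories)

# Toy usage: temporal, spatial, and semantic views with different feature sizes.
model = MultiViewForecaster(view_dims=[8, 16, 4], hidden_dim=32, num_categories=5)
batch = [torch.randn(2, 12, d) for d in (8, 16, 4)]
print(model(batch).shape)  # torch.Size([2, 5])
```

For the paper's actual multi-modal pattern fusion module and hierarchical recurrent framework, consult the full text via the DOI above.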