Conference Paper: MIST: A multiview and multimodal spatial-temporal learning framework for citywide abnormal event forecasting

Title: MIST: A multiview and multimodal spatial-temporal learning framework for citywide abnormal event forecasting
Authors: Huang, Chao; Wu, Xian; Zhang, Chuxu; Yin, Dawei; Zhao, Jiashu; Chawla, Nitesh V.
Keywords: Abnormal Event Forecasting; Deep Neural Networks; Spatial-temporal Data Mining
Issue Date: 2019
Citation: The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019, 2019, p. 717-728
Abstract: Citywide abnormal events, such as crimes and accidents, may result in loss of life or property if not handled efficiently. Automatically predicting such events before they occur is important for a wide spectrum of applications, ranging from public order maintenance and disaster control to modeling people's activities. However, forecasting different categories of citywide abnormal events is very challenging, as it is affected by many complex factors from different views: (i) dynamic intra-region temporal correlations; (ii) complex inter-region spatial correlations; (iii) latent cross-categorical correlations. In this paper, we develop a Multi-View and Multi-Modal Spatial-Temporal learning (MiST) framework to address the above challenges by promoting the collaboration of different views (spatial, temporal, and semantic) and mapping the multi-modal units into the same latent space. Specifically, MiST can preserve the underlying structural information of multi-view abnormal event data and automatically learn the importance of view-specific representations, with the integration of a multi-modal pattern fusion module and a hierarchical recurrent framework. Extensive experiments on three real-world datasets, i.e., crime data and urban anomaly data, demonstrate the superior performance of our MiST method over state-of-the-art baselines across various settings.
Persistent Identifier: http://hdl.handle.net/10722/308787
ISI Accession Number ID: WOS:000483508400068
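
The abstract above describes MiST only at a high level. The following Python sketch is a hypothetical illustration (not the authors' released code) of the two ingredients it names: projecting view-specific features (spatial, temporal, semantic) into a shared latent space with learned per-view importance weights, and feeding the fused representations into a recurrent model that forecasts per-category event intensities. All module names, layer sizes, and the choice of PyTorch with a GRU backbone are assumptions made for illustration.

# Hypothetical sketch only: a minimal multi-view spatial-temporal model,
# assuming each view is already encoded as a fixed-size feature vector
# per region and time step.
import torch
import torch.nn as nn


class MultiViewFusion(nn.Module):
    """Projects each view into a shared latent space and learns
    per-view importance weights with a simple attention mechanism."""

    def __init__(self, view_dims, latent_dim):
        super().__init__()
        self.projections = nn.ModuleList(
            [nn.Linear(d, latent_dim) for d in view_dims]
        )
        self.attn = nn.Linear(latent_dim, 1)

    def forward(self, views):
        # views: list of tensors, each of shape (batch, view_dim_i)
        latent = torch.stack(
            [torch.tanh(proj(v)) for proj, v in zip(self.projections, views)],
            dim=1,
        )  # (batch, num_views, latent_dim)
        weights = torch.softmax(self.attn(latent), dim=1)  # per-view importance
        return (weights * latent).sum(dim=1)  # fused (batch, latent_dim)


class AbnormalEventForecaster(nn.Module):
    """Fuses the views at every time step, models intra-region temporal
    dynamics with a recurrent layer, and predicts per-category scores."""

    def __init__(self, view_dims, latent_dim, hidden_dim, num_categories):
        super().__init__()
        self.fusion = MultiViewFusion(view_dims, latent_dim)
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_categories)

    def forward(self, view_sequences):
        # view_sequences: list of tensors, each (batch, time, view_dim_i)
        batch, time = view_sequences[0].shape[:2]
        fused = torch.stack(
            [self.fusion([v[:, t] for v in view_sequences]) for t in range(time)],
            dim=1,
        )  # (batch, time, latent_dim)
        _, h = self.rnn(fused)
        return self.head(h[-1])  # (batch, num_categories)


# Toy usage: temporal, spatial, and semantic views of sizes 16/32/8.
model = AbnormalEventForecaster([16, 32, 8], latent_dim=24, hidden_dim=48, num_categories=4)
views = [torch.randn(2, 10, d) for d in (16, 32, 8)]
print(model(views).shape)  # torch.Size([2, 4])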

 

DC Field: Value
dc.contributor.author: Huang, Chao
dc.contributor.author: Wu, Xian
dc.contributor.author: Zhang, Chuxu
dc.contributor.author: Yin, Dawei
dc.contributor.author: Zhao, Jiashu
dc.contributor.author: Chawla, Nitesh V.
dc.date.accessioned: 2021-12-08T07:50:08Z
dc.date.available: 2021-12-08T07:50:08Z
dc.date.issued: 2019
dc.identifier.citation: The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019, 2019, p. 717-728
dc.identifier.uri: http://hdl.handle.net/10722/308787
dc.description.abstract: Citywide abnormal events, such as crimes and accidents, may result in loss of life or property if not handled efficiently. Automatically predicting such events before they occur is important for a wide spectrum of applications, ranging from public order maintenance and disaster control to modeling people's activities. However, forecasting different categories of citywide abnormal events is very challenging, as it is affected by many complex factors from different views: (i) dynamic intra-region temporal correlations; (ii) complex inter-region spatial correlations; (iii) latent cross-categorical correlations. In this paper, we develop a Multi-View and Multi-Modal Spatial-Temporal learning (MiST) framework to address the above challenges by promoting the collaboration of different views (spatial, temporal, and semantic) and mapping the multi-modal units into the same latent space. Specifically, MiST can preserve the underlying structural information of multi-view abnormal event data and automatically learn the importance of view-specific representations, with the integration of a multi-modal pattern fusion module and a hierarchical recurrent framework. Extensive experiments on three real-world datasets, i.e., crime data and urban anomaly data, demonstrate the superior performance of our MiST method over state-of-the-art baselines across various settings.
dc.language: eng
dc.relation.ispartof: The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019
dc.subject: Abnormal Event Forecasting
dc.subject: Deep Neural Networks
dc.subject: Spatial-temporal Data Mining
dc.title: MIST: A multiview and multimodal spatial-temporal learning framework for citywide abnormal event forecasting
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3308558.3313730
dc.identifier.scopus: eid_2-s2.0-85066901523
dc.identifier.spage: 717
dc.identifier.epage: 728
dc.identifier.isi: WOS:000483508400068
