Conference Paper: Appearance-Motion Memory Consistency Network for Video Anomaly Detection

Title: Appearance-Motion Memory Consistency Network for Video Anomaly Detection
Authors: Cai, Ruichu; Zhang, Hao; Liu, Wen; Gao, Shenghua; Hao, Zhifeng
Issue Date: 2021
Citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, v. 2A, p. 938-946
Abstract: Abnormal event detection in surveillance video is an essential but challenging task, and many methods have been proposed to deal with this problem. Previous methods either consider only the appearance information or directly integrate the results of appearance and motion information without explicitly considering their endogenous consistency semantics. Inspired by the way humans identify abnormal frames from multi-modality signals, we propose an Appearance-Motion Memory Consistency Network (AMMC-Net). Our method first makes full use of the prior knowledge of appearance and motion signals to explicitly capture the correspondence between them in the high-level feature space. Then, it combines the multi-view features to obtain a more essential and robust feature representation of regular events, which can significantly increase the gap between an abnormal and a regular event. In the anomaly detection phase, we further introduce a commit error in the latent space, jointly with the prediction error in pixel space, to enhance detection accuracy. Solid experimental results on various standard datasets validate the effectiveness of our approach.
Persistent Identifier: http://hdl.handle.net/10722/345252
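
The abstract describes scoring anomalies by combining a prediction error in pixel space with a commit error in the latent space (distance to stored memory items of regular events). The following is a minimal sketch of that combination, not the authors' released code; the function name, the nearest-memory-item distance, and the weighting are illustrative assumptions.

    # Hypothetical sketch of the anomaly-score combination described in the abstract.
    # All names and the weighting scheme are assumptions, not the paper's implementation.
    import numpy as np

    def anomaly_score(pred_frame, true_frame, encoder_feat, memory_items, w_commit=0.5):
        """Higher score = more anomalous."""
        # Pixel-space prediction error: mean squared error between the
        # predicted frame and the actual frame.
        pred_err = np.mean((pred_frame - true_frame) ** 2)

        # Latent-space commit error: distance from the encoder feature to its
        # nearest memory item (a stored prototype of regular events).
        dists = np.linalg.norm(memory_items - encoder_feat, axis=1)
        commit_err = dists.min()

        # Weighted combination of the two errors; in practice each term would
        # typically be normalised over the test video before combining.
        return (1.0 - w_commit) * pred_err + w_commit * commit_err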

 

dc.contributor.author: Cai, Ruichu
dc.contributor.author: Zhang, Hao
dc.contributor.author: Liu, Wen
dc.contributor.author: Gao, Shenghua
dc.contributor.author: Hao, Zhifeng
dc.date.accessioned: 2024-08-15T09:26:10Z
dc.date.available: 2024-08-15T09:26:10Z
dc.date.issued: 2021
dc.identifier.citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, v. 2A, p. 938-946
dc.identifier.uri: http://hdl.handle.net/10722/345252
dc.description.abstract: Abnormal event detection in surveillance video is an essential but challenging task, and many methods have been proposed to deal with this problem. Previous methods either consider only the appearance information or directly integrate the results of appearance and motion information without explicitly considering their endogenous consistency semantics. Inspired by the way humans identify abnormal frames from multi-modality signals, we propose an Appearance-Motion Memory Consistency Network (AMMC-Net). Our method first makes full use of the prior knowledge of appearance and motion signals to explicitly capture the correspondence between them in the high-level feature space. Then, it combines the multi-view features to obtain a more essential and robust feature representation of regular events, which can significantly increase the gap between an abnormal and a regular event. In the anomaly detection phase, we further introduce a commit error in the latent space, jointly with the prediction error in pixel space, to enhance detection accuracy. Solid experimental results on various standard datasets validate the effectiveness of our approach.
dc.language: eng
dc.relation.ispartof: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
dc.title: Appearance-Motion Memory Consistency Network for Video Anomaly Detection
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85129985672
dc.identifier.volume: 2A
dc.identifier.spage: 938
dc.identifier.epage: 946
