File Download
There are no files associated with this item.
Links for fulltext
(May Require Subscription)
- Publisher Website: 10.23919/DATE51398.2021.9474206
- Scopus: eid_2-s2.0-85111061102
Conference Paper: A Video-based Fall Detection Network by Spatio-temporal Joint-point Model on Edge Devices
Title | A Video-based Fall Detection Network by Spatio-temporal Joint-point Model on Edge Devices |
---|---|
Authors | Guan, Z; Li, S; Cheng, Y; Man, C; Mao, W; Wong, N; Yu, H |
Keywords | fall detection; pose estimation; spatio-temporal model; joint-point features |
Issue Date | 2021 |
Publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000198 |
Citation | The 24th Design, Automation and Test in Europe Conference and Exhibition (DATE 2021), Virtual Conference, Grenoble, France, 1-5 February 2021, p. 422-427 |
Abstract | Tripping or falling is among the top threats in elderly healthcare, and the development of automatic fall detection systems is of considerable importance. With the rapid development of the Internet of Things (IoT), camera vision-based solutions have drawn much attention in recent years. However, traditional fall video analysis on the cloud incurs significant communication overhead. This work introduces a fast and lightweight video fall detection network based on a spatio-temporal joint-point model to overcome these hurdles. Instead of detecting falling motion with traditional Convolutional Neural Networks (CNNs), we propose a Long Short-Term Memory (LSTM) model based on time-series joint-point features, which are extracted by a pose extractor and then refined by a geometric joint-point filter. Experiments verify the proposed framework, which achieves a high sensitivity of 98.46% on the Multiple Cameras Fall Dataset and 100% on the UR Fall Dataset. Furthermore, our model can perform pose estimation simultaneously, attaining 73.3 mAP on the COCO keypoint challenge dataset, outperforming OpenPose by 8%. |
Persistent Identifier | http://hdl.handle.net/10722/301978 |
ISSN | 1530-1591 |
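
The abstract above outlines a three-stage pipeline: a pose extractor produces per-frame joint points, a geometric joint-point filter cleans them, and an LSTM classifies the resulting time series as fall or non-fall. The sketch below is only an illustration of that idea, not the authors' implementation; the clip length, joint format (COCO's 17 keypoints), layer sizes, and the `FallDetectionLSTM` class name are all assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): an LSTM fall/no-fall
# classifier over time-series joint-point features. The pose extractor and
# geometric joint-point filter are assumed to have already produced the
# per-frame keypoint coordinates fed into this model.
import torch
import torch.nn as nn

NUM_JOINTS = 17          # COCO keypoint format (assumption)
FEATURES_PER_JOINT = 2   # (x, y) coordinates per joint
SEQ_LEN = 30             # frames per clip (assumption)


class FallDetectionLSTM(nn.Module):
    def __init__(self, hidden_size=128, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=NUM_JOINTS * FEATURES_PER_JOINT,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
        )
        self.classifier = nn.Linear(hidden_size, 2)  # logits: fall / no-fall

    def forward(self, joint_sequences):
        # joint_sequences: (batch, SEQ_LEN, NUM_JOINTS * FEATURES_PER_JOINT)
        _, (h_n, _) = self.lstm(joint_sequences)
        # Classify from the final hidden state of the last LSTM layer.
        return self.classifier(h_n[-1])


if __name__ == "__main__":
    model = FallDetectionLSTM()
    dummy_clip = torch.randn(4, SEQ_LEN, NUM_JOINTS * FEATURES_PER_JOINT)
    print(model(dummy_clip).shape)  # torch.Size([4, 2])
```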
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Guan, Z | - |
dc.contributor.author | Li, S | - |
dc.contributor.author | Cheng, Y | - |
dc.contributor.author | Man, C | - |
dc.contributor.author | Mao, W | - |
dc.contributor.author | Wong, N | - |
dc.contributor.author | Yu, H | - |
dc.date.accessioned | 2021-08-21T03:29:46Z | - |
dc.date.available | 2021-08-21T03:29:46Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | The 24th Design, Automation and Test in Europe Conference and Exhibition (DATE 2021), Virtual Conference, Grenoble, France, 1-5 February 2021, p. 422-427 | - |
dc.identifier.issn | 1530-1591 | - |
dc.identifier.uri | http://hdl.handle.net/10722/301978 | - |
dc.description.abstract | Tripping or falling is among the top threats in elderly healthcare, and the development of automatic fall detection systems is of considerable importance. With the rapid development of the Internet of Things (IoT), camera vision-based solutions have drawn much attention in recent years. However, traditional fall video analysis on the cloud incurs significant communication overhead. This work introduces a fast and lightweight video fall detection network based on a spatio-temporal joint-point model to overcome these hurdles. Instead of detecting falling motion with traditional Convolutional Neural Networks (CNNs), we propose a Long Short-Term Memory (LSTM) model based on time-series joint-point features, which are extracted by a pose extractor and then refined by a geometric joint-point filter. Experiments verify the proposed framework, which achieves a high sensitivity of 98.46% on the Multiple Cameras Fall Dataset and 100% on the UR Fall Dataset. Furthermore, our model can perform pose estimation simultaneously, attaining 73.3 mAP on the COCO keypoint challenge dataset, outperforming OpenPose by 8%. | -
dc.language | eng | - |
dc.publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000198 | - |
dc.relation.ispartof | Design, Automation, and Test in Europe Conference and Exhibition Proceedings | - |
dc.rights | Design, Automation, and Test in Europe Conference and Exhibition Proceedings. Copyright © IEEE Computer Society. | - |
dc.rights | ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | fall detection | - |
dc.subject | pose estimation | - |
dc.subject | spatio-temporal model | - |
dc.subject | joint-point features | - |
dc.title | A Video-based Fall Detection Network by Spatio-temporal Joint-point Model on Edge Devices | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Wong, N: nwong@eee.hku.hk | - |
dc.identifier.authority | Wong, N=rp00190 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.23919/DATE51398.2021.9474206 | - |
dc.identifier.scopus | eid_2-s2.0-85111061102 | - |
dc.identifier.hkuros | 324501 | - |
dc.identifier.spage | 422 | - |
dc.identifier.epage | 427 | - |
dc.publisher.place | United States | - |