Conference Paper: Zero-effort cross-domain gesture recognition with Wi-Fi

Title: Zero-effort cross-domain gesture recognition with Wi-Fi
Authors: Zheng, Yue; Zhang, Yi; Qian, Kun; Zhang, Guidong; Liu, Yunhao; Wu, Chenshu; Yang, Zheng
Keywords: COTS Wi-Fi; Gesture Recognition; Channel State Information
Issue Date: 2019
Citation: MobiSys 2019 - Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services, 2019, p. 313-325
Abstract: Wi-Fi based sensing systems, although appealing because they can be deployed almost anywhere Wi-Fi is available, are still difficult to use in practice without explicit adaptation effort for new data domains. Various pioneering approaches have been proposed to resolve this contradiction by either translating features between domains or generating domain-independent features at a higher learning level. Still, extra training effort, in data collection or model re-training, is necessary when new data domains appear, limiting their practical usability. To advance cross-domain sensing and achieve fully zero-effort sensing, a domain-independent feature at the lower signal level acts as the key enabler. In this paper, we propose Widar3.0, a Wi-Fi based zero-effort cross-domain gesture recognition system. The key insight of Widar3.0 is to derive and estimate velocity profiles of gestures at the lower signal level, which represent unique kinetic characteristics of gestures and are independent of domains. On this basis, we develop a one-fits-all model that requires only one-time training but can adapt to different data domains. We implement this design and conduct comprehensive experiments. The evaluation results show that, without re-training and across various domain factors (i.e., environments, locations, and orientations of persons), Widar3.0 achieves 92.7% in-domain recognition accuracy and 82.6%-92.4% cross-domain recognition accuracy, outperforming state-of-the-art solutions. To the best of our knowledge, Widar3.0 is the first zero-effort cross-domain gesture recognition work via Wi-Fi, a fundamental step towards ubiquitous sensing.
Persistent Identifier: http://hdl.handle.net/10722/303615
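
The abstract above frames the key idea as extracting a domain-independent velocity profile from Wi-Fi Channel State Information (CSI) at the signal level. As a rough illustration only, and not the Widar3.0 algorithm itself (which estimates body-coordinate velocity profiles from multi-link CSI), the sketch below shows how a Doppler-style velocity spectrogram can be derived from a single CSI amplitude stream with a short-time Fourier transform; the CSI sampling rate, carrier wavelength, and synthetic input trace are assumed values chosen for the example.

# Illustrative sketch only: derive a Doppler-style velocity profile from a
# CSI amplitude stream via a short-time Fourier transform. This is NOT the
# Widar3.0 body-coordinate velocity profile (BVP) estimation; the sampling
# rate, wavelength, and synthetic input below are assumptions for the example.
import numpy as np
from scipy.signal import stft

CSI_SAMPLE_RATE_HZ = 1000      # assumed CSI sampling rate
WIFI_WAVELENGTH_M = 0.0525     # ~5.7 GHz carrier wavelength, assumed

def velocity_profile(csi_amplitude):
    """Return (times_s, velocities_mps, power) for one subcarrier's amplitude stream."""
    # Remove the static component reflected by walls and furniture, keeping
    # only the dynamic reflections caused by the moving hand or body.
    dynamic = csi_amplitude - np.mean(csi_amplitude)

    # Short-time Fourier transform yields a Doppler spectrogram over time.
    freqs, times, spec = stft(dynamic, fs=CSI_SAMPLE_RATE_HZ, nperseg=256, noverlap=192)

    # A reflector moving at radial velocity v shifts the carrier by
    # f_D = 2 * v / wavelength, so map each Doppler bin back to a velocity.
    velocities = freqs * WIFI_WAVELENGTH_M / 2.0
    power = np.abs(spec) ** 2
    return times, velocities, power

if __name__ == "__main__":
    # Synthetic stand-in for a real CSI capture: a 40 Hz Doppler ripple.
    t = np.arange(0, 2.0, 1.0 / CSI_SAMPLE_RATE_HZ)
    fake_csi = 1.0 + 0.1 * np.sin(2 * np.pi * 40 * t)
    times, velocities, power = velocity_profile(fake_csi)
    peak_velocity = velocities[np.argmax(power.max(axis=1))]
    print("dominant radial velocity ~ %.2f m/s" % peak_velocity)

In the actual system, signal-level velocity features of this kind feed a single recognizer trained once, which is what lets Widar3.0 generalize across environments, locations, and orientations without re-training, per the abstract.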

 

DC Field | Value | Language
dc.contributor.author | Zheng, Yue | -
dc.contributor.author | Zhang, Yi | -
dc.contributor.author | Qian, Kun | -
dc.contributor.author | Zhang, Guidong | -
dc.contributor.author | Liu, Yunhao | -
dc.contributor.author | Wu, Chenshu | -
dc.contributor.author | Yang, Zheng | -
dc.date.accessioned | 2021-09-15T08:25:40Z | -
dc.date.available | 2021-09-15T08:25:40Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | MobiSys 2019 - Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services, 2019, p. 313-325 | -
dc.identifier.uri | http://hdl.handle.net/10722/303615 | -
dc.description.abstract | Wi-Fi based sensing systems, although sound as being deployed almost everywhere there is Wi-Fi, are still practically difficult to be used without explicit adaptation efforts to new data domains. Various pioneering approaches have been proposed to resolve this contradiction by either translating features between domains or generating domain-independent features at a higher learning level. Still, extra training efforts are necessary in either data collection or model re-training when new data domains appear, limiting their practical usability. To advance cross-domain sensing and achieve fully zero-effort sensing, a domain-independent feature at the lower signal level acts as a key enabler. In this paper, we propose Widar3.0, a Wi-Fi based zero-effort cross-domain gesture recognition system. The key insight of Widar3.0 is to derive and estimate velocity profiles of gestures at the lower signal level, which represent unique kinetic characteristics of gestures and are irrespective of domains. On this basis, we develop a one-fits-all model that requires only one-time training but can adapt to different data domains. We implement this design and conduct comprehensive experiments. The evaluation results show that without re-training and across various domain factors (i.e. environments, locations and orientations of persons), Widar3.0 achieves 92.7% in-domain recognition accuracy and 82.6%-92.4% cross-domain recognition accuracy, outperforming the state-of-the-art solutions. To the best of our knowledge, Widar3.0 is the first zero-effort cross-domain gesture recognition work via Wi-Fi, a fundamental step towards ubiquitous sensing. | -
dc.language | eng | -
dc.relation.ispartof | MobiSys 2019 - Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services | -
dc.subject | COTS Wi-Fi | -
dc.subject | Gesture Recognition | -
dc.subject | Channel State Information | -
dc.title | Zero-effort cross-domain gesture recognition with Wi-Fi | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1145/3307334.3326081 | -
dc.identifier.scopus | eid_2-s2.0-85069182201 | -
dc.identifier.spage | 313 | -
dc.identifier.epage | 325 | -
