Article: PhysFormer++: Facial Video-Based Physiological Measurement with SlowFast Temporal Difference Transformer

Title: PhysFormer++: Facial Video-Based Physiological Measurement with SlowFast Temporal Difference Transformer
Authors: Yu, ZT; Shen, YM; Shi, JA; Zhao, HS; Cui, YW; Zhang, JH; Torr, P; Zhao, GY
Keywords: Cross-attention
Periodic-attention
RPPG
SlowFast
Temporal difference transformer
Issue Date: 15-Feb-2023
Publisher: Springer
Citation: International Journal of Computer Vision, 2023, v. 131, n. 6, p. 1307-1330
Abstract: Remote photoplethysmography (rPPG), which aims at measuring heart activities and physiological signals from facial video without any contact, has great potential in many applications (e.g., remote healthcare and affective computing). Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields, which neglect the long-range spatio-temporal perception and interaction needed for rPPG modeling. In this paper, we propose two end-to-end video transformer-based architectures, namely PhysFormer and PhysFormer++, to adaptively aggregate both local and global spatio-temporal features for rPPG representation enhancement. As key modules in PhysFormer, the temporal difference transformers first enhance the quasi-periodic rPPG features with temporal-difference-guided global attention, and then refine the local spatio-temporal representation against interference. To better exploit the temporal contextual and periodic rPPG clues, we also extend PhysFormer to the two-pathway SlowFast-based PhysFormer++ with temporal difference periodic- and cross-attention transformers. Furthermore, we propose label distribution learning and a curriculum-learning-inspired dynamic constraint in the frequency domain, which provide elaborate supervision for PhysFormer and PhysFormer++ and alleviate overfitting. Comprehensive experiments are performed on four benchmark datasets to show our superior performance on both intra- and cross-dataset testing. Unlike most transformer networks, which need pretraining on large-scale datasets, the proposed PhysFormer family can easily be trained from scratch on rPPG datasets, which makes it promising as a novel transformer baseline for the rPPG community.
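The abstract's central idea of temporal-difference-guided attention can be illustrated with a toy sketch. This is not the paper's actual architecture; the function name, the `lam` weighting parameter, and the single-head attention layout are all hypothetical simplifications, assuming only that per-frame feature tokens are enhanced with frame-to-frame differences before attending over time:

```python
import numpy as np

def temporal_difference_attention(x, lam=0.5):
    """Toy sketch (not the paper's implementation): self-attention over
    time whose queries are enhanced with frame-to-frame differences.

    x   : (T, C) array of per-frame feature tokens
    lam : hypothetical weight on the temporal-difference term
    """
    T, C = x.shape
    # Temporal difference: d[t] = x[t] - x[t-1], with d[0] = 0
    diff = np.zeros_like(x)
    diff[1:] = x[1:] - x[:-1]
    # Difference-guided queries; keys and values stay as the plain tokens
    q = x + lam * diff
    k, v = x, x
    scores = q @ k.T / np.sqrt(C)                # (T, T) attention logits
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return attn @ v                              # (T, C) aggregated output

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))  # 8 frames, 16-dim tokens
out = temporal_difference_attention(feats)
print(out.shape)
```

Emphasizing frame-to-frame differences in the queries biases the attention toward the subtle temporal variations that carry the quasi-periodic rPPG signal, rather than static appearance.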
Persistent Identifier: http://hdl.handle.net/10722/331841
ISSN: 0920-5691
2023 Impact Factor: 11.6
2023 SCImago Journal Rankings: 6.668
ISI Accession Number ID: WOS:000933348800002

 

DC Field: Value
dc.contributor.author: Yu, ZT
dc.contributor.author: Shen, YM
dc.contributor.author: Shi, JA
dc.contributor.author: Zhao, HS
dc.contributor.author: Cui, YW
dc.contributor.author: Zhang, JH
dc.contributor.author: Torr, P
dc.contributor.author: Zhao, GY
dc.date.accessioned: 2023-09-21T06:59:23Z
dc.date.available: 2023-09-21T06:59:23Z
dc.date.issued: 2023-02-15
dc.identifier.citation: International Journal of Computer Vision, 2023, v. 131, n. 6, p. 1307-1330
dc.identifier.issn: 0920-5691
dc.identifier.uri: http://hdl.handle.net/10722/331841
dc.description.abstract: Remote photoplethysmography (rPPG), which aims at measuring heart activities and physiological signals from facial video without any contact, has great potential in many applications (e.g., remote healthcare and affective computing). Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields, which neglect the long-range spatio-temporal perception and interaction needed for rPPG modeling. In this paper, we propose two end-to-end video transformer-based architectures, namely PhysFormer and PhysFormer++, to adaptively aggregate both local and global spatio-temporal features for rPPG representation enhancement. As key modules in PhysFormer, the temporal difference transformers first enhance the quasi-periodic rPPG features with temporal-difference-guided global attention, and then refine the local spatio-temporal representation against interference. To better exploit the temporal contextual and periodic rPPG clues, we also extend PhysFormer to the two-pathway SlowFast-based PhysFormer++ with temporal difference periodic- and cross-attention transformers. Furthermore, we propose label distribution learning and a curriculum-learning-inspired dynamic constraint in the frequency domain, which provide elaborate supervision for PhysFormer and PhysFormer++ and alleviate overfitting. Comprehensive experiments are performed on four benchmark datasets to show our superior performance on both intra- and cross-dataset testing. Unlike most transformer networks, which need pretraining on large-scale datasets, the proposed PhysFormer family can easily be trained from scratch on rPPG datasets, which makes it promising as a novel transformer baseline for the rPPG community.
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: International Journal of Computer Vision
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Cross-attention
dc.subject: Periodic-attention
dc.subject: RPPG
dc.subject: SlowFast
dc.subject: Temporal difference transformer
dc.title: PhysFormer++: Facial Video-Based Physiological Measurement with SlowFast Temporal Difference Transformer
dc.type: Article
dc.identifier.doi: 10.1007/s11263-023-01758-1
dc.identifier.scopus: eid_2-s2.0-85148065784
dc.identifier.volume: 131
dc.identifier.issue: 6
dc.identifier.spage: 1307
dc.identifier.epage: 1330
dc.identifier.eissn: 1573-1405
dc.identifier.isi: WOS:000933348800002
dc.publisher.place: DORDRECHT
dc.identifier.issnl: 0920-5691
