Article: Transferring Subject-Specific Knowledge Across Stimulus Frequencies in SSVEP-Based BCIs

Title: Transferring Subject-Specific Knowledge Across Stimulus Frequencies in SSVEP-Based BCIs
Authors: Wong, CM; Wang, Z; Rosa, AC; Chen, CLP; Jung, TP; Hu, Y; Wan, F
Keywords: Brain–computer interface (BCI); steady-state visually evoked potential (SSVEP); stimulus-to-stimulus transfer; transfer learning
Issue Date: 2021
Publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=8856
Citation: IEEE Transactions on Automation Science and Engineering, 2021, v. 18, n. 2, p. 552-563
Abstract: Learning from a subject's calibration data can significantly improve the performance of a steady-state visually evoked potential (SSVEP)-based brain-computer interface (BCI); for example, state-of-the-art target recognition methods utilize learned subject-specific and stimulus-specific model parameters. Unfortunately, when dealing with new stimuli or new subjects, new calibration data must be acquired, requiring laborious calibration sessions, which is a major challenge in developing high-performance BCIs for real-life applications. This study investigates the feasibility of transferring the model parameters (i.e., the spatial filters and the SSVEP templates) across two different groups of visual stimuli in SSVEP-based BCIs. According to our exploration, we can extract a common spatial filter from the spatial filters across different stimulus frequencies and a common impulse response from the SSVEP templates across neighboring stimulus frequencies: the common spatial filter serves as the transferred spatial filter, and the common impulse response is used to reconstruct the transferred SSVEP template, following the theory that an SSVEP is a superposition of impulse responses. We then develop a transfer learning canonical correlation analysis (tlCCA) incorporating the transferred model parameters. For evaluation, we compare the recognition performance of calibration-free methods, a calibration-based method, and the proposed tlCCA on an SSVEP data set with 60 subjects. Experimental results confirm that the spatial filters share commonality across different frequencies and the impulse responses share commonality across neighboring frequencies. More importantly, the tlCCA performs significantly better than the calibration-free algorithms and comparably to the calibration-based algorithm.
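The superposition theory invoked above, that an SSVEP is a superposition of impulse responses to the individual stimulus cycles, can be sketched as a convolution of a periodic impulse train with a learned impulse response. The sketch below is only an illustration of that reconstruction idea: the function name and the Hanning-window "impulse response" are invented for the demo, and the paper's actual estimation of the common impulse response is not reproduced.

```python
import numpy as np

def reconstruct_ssvep_template(impulse_response, freq, fs, duration):
    """Reconstruct an SSVEP template as a superposition of impulse
    responses: one copy of the impulse response per stimulus cycle,
    i.e., a convolution with a periodic impulse train at `freq`."""
    n = int(fs * duration)
    impulse_train = np.zeros(n)
    # place a unit impulse at the onset of each stimulus cycle
    onsets = np.arange(0, duration, 1.0 / freq)
    impulse_train[np.round(onsets * fs).astype(int)] = 1.0
    return np.convolve(impulse_train, impulse_response)[:n]

fs = 250                            # sampling rate (Hz), assumed
h = np.hanning(50)                  # toy impulse response (~0.2 s)
tpl_10hz = reconstruct_ssvep_template(h, freq=10.0, fs=fs, duration=1.0)
# the same impulse response can generate a template for a neighboring
# frequency never seen in calibration, e.g. a new 10.5 Hz stimulus
tpl_new = reconstruct_ssvep_template(h, freq=10.5, fs=fs, duration=1.0)
```

Once the transient at the start has passed, the reconstructed template is periodic at the stimulus frequency, which is the property the transferred templates rely on.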
Note to Practitioners: This work is motivated by the long calibration time required to use a steady-state visually evoked potential (SSVEP)-based brain-computer interface (BCI), because most state-of-the-art frequency recognition methods consider only the situation in which the calibration data and the test data come from the same subject and the same visual stimulus. This article assumes that the model parameters share stimulus-nonspecific knowledge within a limited stimulus frequency range, so that a subject's old calibration data can be reused to learn new model parameters for new visual stimuli. First, the model parameters can be decomposed into stimulus-nonspecific (or subject-specific) knowledge and stimulus-specific knowledge. Second, new model parameters can be generated by transferring the knowledge across stimulus frequencies. A new recognition algorithm is then developed using the transferred model parameters. Experimental results validate the assumptions, and the proposed scheme could be extended to other scenarios, such as facing new subjects or adopting new signal acquisition equipment, which would be helpful to the future development of zero-calibration SSVEP-based BCIs for real-life healthcare applications.
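The final recognition step described above reduces, in its simplest form, to template matching: project the multichannel test trial through the (transferred) spatial filter, correlate the result with each stimulus's (reconstructed) template, and pick the stimulus with the highest correlation. The sketch below shows only this generic correlation step, not the full tlCCA algorithm; the filter, templates, and data are synthetic placeholders.

```python
import numpy as np

def recognize_target(test_trial, spatial_filter, templates):
    """Score a test trial against each candidate template by Pearson
    correlation after spatial filtering; return the best index."""
    projected = spatial_filter @ test_trial        # (samples,)
    scores = [np.corrcoef(projected, tpl)[0, 1] for tpl in templates]
    return int(np.argmax(scores)), scores

# toy demo: 8 channels, 250 samples, two candidate stimulus templates
rng = np.random.default_rng(0)
t = np.arange(250) / 250.0
tpl_a = np.sin(2 * np.pi * 10.0 * t)               # 10 Hz template
tpl_b = np.sin(2 * np.pi * 12.0 * t)               # 12 Hz template
w = np.full(8, 1 / 8)                              # toy averaging filter
trial = np.tile(tpl_a, (8, 1)) + 0.1 * rng.standard_normal((8, 250))
target, scores = recognize_target(trial, w, [tpl_a, tpl_b])
```

Because the trial was generated from the 10 Hz template plus noise, the first score dominates and index 0 is selected.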
Persistent Identifier: http://hdl.handle.net/10722/305410
ISSN: 1545-5955
2021 Impact Factor: 6.636
2020 SCImago Journal Rankings: 1.314
ISI Accession Number ID: WOS:000638401500016

 

DC Field: Value
dc.contributor.author: WONG, CM
dc.contributor.author: WANG, Z
dc.contributor.author: ROSA, AC
dc.contributor.author: CHEN, CLP
dc.contributor.author: JUNG, TP
dc.contributor.author: Hu, Y
dc.contributor.author: WAN, F
dc.date.accessioned: 2021-10-20T10:09:00Z
dc.date.available: 2021-10-20T10:09:00Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Automation Science and Engineering, 2021, v. 18, n. 2, p. 552-563
dc.identifier.issn: 1545-5955
dc.identifier.uri: http://hdl.handle.net/10722/305410
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=8856
dc.relation.ispartof: IEEE Transactions on Automation Science and Engineering
dc.rights: IEEE Transactions on Automation Science and Engineering. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Brain–computer interface (BCI)
dc.subject: steady-state visually evoked potential (SSVEP)
dc.subject: stimulus-to-stimulus transfer
dc.subject: transfer learning
dc.title: Transferring Subject-Specific Knowledge Across Stimulus Frequencies in SSVEP-Based BCIs
dc.type: Article
dc.identifier.email: Hu, Y: yhud@hku.hk
dc.identifier.authority: Hu, Y=rp00432
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TASE.2021.3054741
dc.identifier.scopus: eid_2-s2.0-85101471480
dc.identifier.hkuros: 328178
dc.identifier.volume: 18
dc.identifier.issue: 2
dc.identifier.spage: 552
dc.identifier.epage: 563
dc.identifier.isi: WOS:000638401500016
dc.publisher.place: United States
