Development of a ternary hybrid fNIRS-EEG brain–computer interface based on imagined speech

Brain-Computer Interfaces, Volume 6 (4): 13 – Oct 2, 2019

Abstract

There is increasing interest in developing brain-computer interfaces (BCIs) that can differentiate intuitive mental tasks such as imagined speech. Both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) have been used for this purpose. However, the classification accuracy and number of commands in such BCIs have been limited. The use of multi-modal BCIs to address these issues has been proposed for some common BCI tasks, but not for imagined speech. Here, we propose a multi-class hybrid fNIRS-EEG BCI based on imagined speech. Eleven participants performed multiple iterations of three tasks: mentally repeating ‘yes’ or ‘no’ for 15 s, or an equivalent duration of unconstrained rest. We achieved an average ternary classification accuracy of 70.45 ± 19.19%, which is significantly better than that attained with each modality alone (p < 0.05). Our findings suggest that concurrent measurements of EEG and fNIRS can improve the classification accuracy of BCIs based on imagined speech.
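
The article's keywords name a discrete wavelet transform (DWT) for feature extraction and regularized linear discriminant analysis (RLDA) for classification. The Python sketch below is illustrative only and is not the authors' pipeline: it assumes synthetic EEG and fNIRS epochs, generic feature choices (DWT sub-band energies for EEG; mean and slope for fNIRS), and shrinkage LDA as one common form of RLDA.

# Illustrative sketch (not the authors' code): ternary classification of
# imagined 'yes' / 'no' / rest from concatenated EEG + fNIRS features.
# All data below are synthetic; dimensions are hypothetical.

import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical recording: 60 trials, 8 EEG channels at 256 Hz for 15 s,
# and 8 fNIRS (HbO) channels at 10 Hz over the same window.
n_trials, n_eeg_ch, fs_eeg = 60, 8, 256
n_nirs_ch, fs_nirs = 8, 10
eeg = rng.standard_normal((n_trials, n_eeg_ch, 15 * fs_eeg))
nirs = rng.standard_normal((n_trials, n_nirs_ch, 15 * fs_nirs))
labels = rng.integers(0, 3, size=n_trials)  # 0 = 'yes', 1 = 'no', 2 = rest

def eeg_features(trial):
    """Per-channel DWT sub-band log-energies (db4 wavelet, 4 levels)."""
    feats = []
    for ch in trial:
        coeffs = pywt.wavedec(ch, 'db4', level=4)
        feats.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)
    return feats

def nirs_features(trial):
    """Per-channel mean and linear slope of the haemodynamic signal."""
    feats = []
    for ch in trial:
        slope = np.polyfit(np.arange(ch.size), ch, 1)[0]
        feats.extend([ch.mean(), slope])
    return feats

# Concatenate both modalities into one hybrid feature vector per trial.
X = np.array([eeg_features(e) + nirs_features(n) for e, n in zip(eeg, nirs)])

# Shrinkage LDA is one standard regularized-LDA variant.
clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
acc = cross_val_score(clf, X, labels, cv=5)
print(f"ternary accuracy: {acc.mean():.2f} (chance ≈ 0.33 on synthetic data)")

Concatenating the two feature sets before classification is only one possible fusion strategy; decision-level fusion of separate EEG and fNIRS classifiers is another common option.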

References (50)

Publisher
Taylor & Francis
Copyright
© 2019 Informa UK Limited, trading as Taylor & Francis Group
ISSN
2326-2621
eISSN
2326-263X
DOI
10.1080/2326263X.2019.1698928

Journal

Brain-Computer Interfaces, Taylor & Francis

Published: Oct 2, 2019

Keywords: Brain-computer interface; imagined speech; hybrid BCI; fNIRS; EEG; regularized linear discriminant analysis (RLDA); discrete wavelet transform (DWT)
