Brain-Computer Interface (BCI) is a rapidly developing technology that aims to support individuals with various disabilities and, ultimately, to improve everyday quality of life. Sensorimotor rhythm-based BCIs have demonstrated remarkable results in controlling virtual or physical external devices, but they still face a number of challenges and limitations, chief among them multiple degrees-of-freedom control, accuracy, and robustness. In this work, we develop a multiclass BCI decoding algorithm that uses electroencephalography (EEG) source imaging, a technique that maps scalp potentials to cortical activations, to compensate for the low spatial resolution of EEG. Spatial features were extracted using Common Spatial Pattern (CSP) filters in the cortical source space, from a number of selected Regions of Interest (ROIs). Classification was performed with an ensemble model built from the individual ROI classification models. The evaluation was performed on the BCI Competition IV dataset 2a, which features four motor imagery classes from nine participants. Our results revealed a mean accuracy increase of 5.6% over the conventional application of CSP on the sensors. Neuroanatomical constraints and prior neurophysiological knowledge play an important role in developing source space-based BCI algorithms. Feature selection and the classifier characteristics of our implementation will be explored further to raise performance to the current state of the art.
Brain-Computer Interface (BCI) is emerging as a promising rehabilitation technology that aims to establish a connection between brain activity and external devices. Recent advances in invasive BCIs have demonstrated the feasibility of performing complex motor tasks using brain signals by people with disabilities such as severe spinal cord injury and quadriplegia [
A variety of brain signal types and features have been used to decode user intent in noninvasive EEG-based BCIs, such as visual evoked potential (VEP), P300 response, slow cortical potentials (SCP), and sensorimotor rhythm (SMR), to name but a few [
Nonetheless, noninvasive BCIs also have a number of limitations with regard to reliability, speed, and accuracy, and many challenges to overcome before they can meet the needs of both research and everyday use. Key requirements for the success of SMR-BCIs include classification accuracy, performance robustness, and asynchronous, intuitive control, which requires the decoding of multiple motor imagery tasks. Control of a complex external device with multiple degrees of freedom, such as a robotic arm or an artificial limb, can be better achieved by utilizing motor imagery classes that are related to the intended end-effector movement [
Moreover, intrinsic drawbacks of EEG include low signal-to-noise ratio (SNR), low spatial resolution, and imprecise, indirect measurement of brain activity, mainly attributed to the volume conduction effect. This effect describes the spread of the brain’s electrical field as it is transmitted from the source space through the cerebrospinal fluid, skull, and scalp to reach the scalp surface where the electrodes lie, known as the sensor space [
In the current study, we describe the development of a BCI algorithm aiming to decode four motor imagery (MI) tasks. To overcome the issues associated with low spatial resolution, we use source imaging and extract features in the cortical source space from selected Regions of Interest (ROIs), using Common Spatial Pattern filters. Finally, classification is performed with an ensemble model that synergistically combines the classification models of the selected ROIs in order to increase classification accuracy.
The BCI Competition IV 2a dataset was used to develop and test the BCI decoding algorithm. The dataset contains recordings from nine healthy subjects performing four motor imagery tasks: left hand, right hand, both feet, and tongue [
At the beginning of each trial (
Diagram of a trial and timings during a session of the BCI Competition IV 2a dataset.
Signal analysis was performed solely on the EEG electrodes, and the EOG channels were excluded. Average reference was used, and the data were band-pass filtered at 7–15 Hz using a zero-phase FIR filter in order to capture the event related desynchronization and synchronization (ERD/ERS) activity [
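As an illustrative sketch of this preprocessing step (common average reference followed by a 7–15 Hz zero-phase FIR band-pass), assuming the dataset's 250 Hz sampling rate and an arbitrary filter length:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def preprocess(eeg, fs=250.0, band=(7.0, 15.0), numtaps=251):
    """Average-reference and zero-phase band-pass filter EEG.

    eeg: (n_channels, n_samples) array. The 7-15 Hz band follows the
    paper; fs is the dataset's sampling rate; numtaps is an assumption.
    """
    # Common average reference: subtract the mean across channels.
    eeg = eeg - eeg.mean(axis=0, keepdims=True)
    # Linear-phase FIR band-pass; filtfilt applies it forward and
    # backward, yielding zero net phase distortion.
    taps = firwin(numtaps, band, pass_zero=False, fs=fs)
    return filtfilt(taps, 1.0, eeg, axis=1)
```

Applying the same zero-phase filter to every channel preserves the zero-mean property introduced by the average reference.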
EEG source imaging was deployed to mitigate low spatial resolution and low SNR caused by volume conduction. EEG source imaging maps sensor activity to brain neural current distribution at fixed positions over the cortex. The source activity is defined in terms of current dipoles, at a grid of vertices on the MNI cortical surface template, that model electrical activity of neuronal groups firing synchronously [
The Montreal Neurological Institute (MNI) Colin 27 MRI generic template [
Given the lead-field matrix, the inverse EEG problem consists of finding the dipole current density D in (1). This is a highly underdetermined problem, since the number of dipoles (sources) is on the order of thousands while the number of EEG channels is at most on the order of hundreds, which in practice means that different current distributions (brain activity) can produce identical EEG sensor values. Among the various methods for solving the inverse problem, here the weighted minimum norm estimate (wMNE) method was used, as implemented in the Brainstorm toolbox [
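A minimal numerical sketch of the wMNE solution, D = W Lᵀ (L W Lᵀ + λ²I)⁻¹ X, assuming a precomputed lead-field matrix, a simple column-norm depth weighting, and a fixed regularization parameter (Brainstorm's implementation derives these quantities from the data and head model):

```python
import numpy as np

def wmne(L, X, lam=0.1):
    """Weighted minimum-norm estimate (sketch).

    L: (n_channels, n_sources) lead-field matrix,
    X: (n_channels, n_samples) sensor data,
    lam: regularization strength (an assumed fixed value).
    """
    # Depth weighting: normalize each lead-field column by its norm,
    # a common choice to reduce the bias toward superficial sources.
    w = 1.0 / np.linalg.norm(L, axis=0)
    WLt = (L * w**2).T                      # W L^T with W = diag(w^2)
    G = L @ WLt + lam**2 * np.eye(L.shape[0])
    return WLt @ np.linalg.solve(G, X)      # (n_sources, n_samples)
```

For small λ the re-projected solution L·D reproduces the sensor data closely, which is the minimum-norm property the method relies on.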
Cortical Regions of Interest (ROIs) were defined on the sensorimotor cortex to reduce the dimension of the source data derived from the inverse problem solution, imposing anatomical constraints and aiming to extract information relevant to the MI tasks [
Regions of Interest (ROIs) at the cortical level: (a) midline surface, left hemisphere, (b) top view, both hemispheres, and (c) lateral view, right hemisphere. 1: SAC, 2: S1F, 3: S1H, 4: S2, 5: CMA, 6: M1F, 7: M1H, 8: M1L, 9: SMA, 10: pSMA, 11: PMd, 12: PMv [
Feature extraction was performed at the source level, specifically on the ROI data. Common Spatial Pattern (CSP) filters are among the most widely used feature extraction methods in the BCI domain [
The original CSP algorithm was developed for two-class problems, though multiclass extensions exist [
In this work, CSP filters were applied to the source data, and they were calculated on every ROI current dipole time-series. Assuming
The feature vector of ROIq,
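The per-ROI CSP filtering and log-variance feature extraction can be sketched for the two-class case as a generalized eigenproblem (a multiclass extension such as one-vs-rest would reuse this routine; all names and the number of filter pairs are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Two-class CSP via the generalized eigenproblem
    C_a w = lambda (C_a + C_b) w (sketch).

    trials_*: lists of (n_dipoles, n_samples) arrays for one ROI.
    Returns (2 * n_pairs, n_dipoles) spatial filters.
    """
    def mean_cov(trials):
        # Trace-normalized trial covariances, averaged per class.
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Eigenvectors at the extremes of the spectrum maximize variance
    # for one class while minimizing it for the other.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, pick].T

def log_var_features(trial, W):
    """Log of normalized variance of the CSP-filtered signals."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

The feature vector of an ROI is then the concatenation of these log-variance values over its CSP components.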
An ensemble classification model was used for the prediction of the MI task [
Outline of the implemented decoding algorithm. EEG sensor time series are transformed to current dipole time series. Data from the Regions of Interest (ROIs) are spatially filtered by ROI-CSP filters, to extract features to be classified by independent ROI classification models. Predicted class is the most voted class of the ROI classification models. On the classification model scheme, the predicted class is the outcome of an inference mechanism (majority vote). The inference mechanism takes as input the predicted class from the individual ROI classification models.
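A minimal sketch of this ensemble scheme, with one LDA model per ROI and a majority-vote inference mechanism (per-ROI feature extraction is assumed to have been performed already):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class RoiVotingEnsemble:
    """Majority vote over independent per-ROI LDA models (sketch)."""

    def fit(self, roi_features, y):
        # roi_features: list of (n_trials, n_features) arrays, one per ROI.
        self.models_ = [LinearDiscriminantAnalysis().fit(F, y)
                        for F in roi_features]
        return self

    def predict(self, roi_features):
        # votes: (n_rois, n_trials) matrix of per-ROI predictions.
        votes = np.stack([m.predict(F)
                          for m, F in zip(self.models_, roi_features)])
        # Most-voted class per trial (ties broken by the lowest label).
        return np.array([np.bincount(v).argmax() for v in votes.T])
```

Each ROI model votes independently, so a few noisy ROIs do not dominate the final prediction.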
The defined ROIs extend across the entire motor cortex, while the cortical activity related to the performed motor imagery tasks derives from only a subset of them. ROIs were therefore selected based on the accuracy of their classification models. To select the most accurate ROIs, 10-fold cross-validation using the LDA classifier was performed at the ROI level, and this was repeated 10 times to ensure more robust results (in every run, the CSP filters are calculated on different data). The
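The ROI selection step can be sketched with scikit-learn's repeated cross-validation; unlike the procedure described above, this simplified version ranks fixed, precomputed features instead of recomputing CSP filters in every run, and `n_keep` is an illustrative parameter:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def rank_rois(roi_features, y, n_keep=5, seed=0):
    """Rank ROIs by 10x10-fold CV accuracy of their LDA model and
    return the indices of the best n_keep ROIs (sketch; the paper
    recomputes CSP filters inside each run, which this omits)."""
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10,
                                 random_state=seed)
    scores = [cross_val_score(LinearDiscriminantAnalysis(), F, y,
                              cv=cv).mean()
              for F in roi_features]
    return np.argsort(scores)[::-1][:n_keep]
```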
The performance of the classification scheme in the source space was further compared to that in the sensor space using the same setting (10-fold cross-validation of the LDA classifier, repeated 10 times). For the sensor space, the CSP filters were computed on the preprocessed EEG data. Moreover, to better assess the developed method, performance in terms of Cohen’s kappa statistic, a useful metric for multiclass prediction problems, was compared to that of the winner of BCI Competition IV, dataset 2a [
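As a sanity check on the kappa comparison, note that for a balanced K-class problem with uniform marginals, Cohen's kappa reduces to κ ≈ (acc − 1/K)/(1 − 1/K); a minimal sketch under that simplifying assumption:

```python
def balanced_kappa(acc, k=4):
    """Cohen's kappa under the simplifying assumption of uniform
    class marginals: kappa = (acc - 1/k) / (1 - 1/k)."""
    return (acc - 1.0 / k) / (1.0 - 1.0 / k)

# The reported mean accuracies map onto the reported mean kappas:
# 54.1% (sensor) -> ~0.39 and 59.7% (source) -> ~0.46.
```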
Four different classifiers were tested to select the one used for prediction. LDA, kNN, Naive Bayesian, and Decision Tree were each evaluated with 10-fold cross-validation, repeated 10 times, on all subjects. LDA showed superior performance with the highest prediction accuracy across all subjects, with a mean accuracy of 54.1%. Naive Bayesian was second with 46.9%, followed by Decision Tree and kNN with 45.5% and 44.5%, respectively (Figure
Classification accuracy of kNN, LDA, Naïve Bayesian, and Decision Tree classifiers across all subject data.
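The classifier comparison can be reproduced in outline with scikit-learn, using default hyperparameters for each model (an assumption, as the exact settings are not specified here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def compare_classifiers(X, y, seed=0):
    """10x10-fold CV accuracy for the four tested classifier types
    (hyperparameters are scikit-learn defaults, an assumption)."""
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10,
                                 random_state=seed)
    models = {"LDA": LinearDiscriminantAnalysis(),
              "NaiveBayes": GaussianNB(),
              "kNN": KNeighborsClassifier(),
              "DecisionTree": DecisionTreeClassifier(random_state=seed)}
    return {name: cross_val_score(m, X, y, cv=cv).mean()
            for name, m in models.items()}
```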
The source method of classification achieved consistently higher accuracy across all subjects (43.7% to 74.5%) than the sensor method (37.7% to 73.4%), as illustrated in Figure
Classification accuracy of the developed source method and the equivalent traditional sensor approach, on the BCI Competition IV, 2a dataset.
10 × 10-fold cross-validation performance in terms of mean classification accuracy (%) of the developed source method and the equivalent sensor method.
Subject | A01T | A02T | A03T | A04T | A05T | A06T | A07T | A08T | A09T | Mean |
---|---|---|---|---|---|---|---|---|---|---|
Sensor | 61.0 | 45.8 | 68.2 | 39.4 | 38.0 | 37.7 | 56.3 | 67.3 | 73.4 | 54.1 |
Source | 62.4 | 51.3 | 70.9 | 46.3 | 47.6 | 43.7 | 67.2 | 71.9 | 74.5 | 59.7 |
10 × 10-fold cross-validation performance in terms of mean Cohen’s kappa value, of the developed method in source and sensor level, and the method developed by the winner of BCI Competition IV, dataset 2a.
Subject | A01T | A02T | A03T | A04T | A05T | A06T | A07T | A08T | A09T | Mean |
---|---|---|---|---|---|---|---|---|---|---|
Sensor | 0.48 | 0.27 | 0.57 | 0.19 | 0.17 | 0.17 | 0.42 | 0.56 | 0.64 | 0.39 |
Source | 0.50 | 0.34 | 0.61 | 0.30 | 0.30 | 0.26 | 0.56 | 0.63 | 0.66 | 0.46 |
Winner FBCSP | 0.76 | 0.47 | 0.83 | 0.48 | 0.60 | 0.34 | 0.86 | 0.80 | 0.78 | 0.65 |
Classification sensitivity and specificity, also referred to as the true positive and true negative rate respectively, for the source and sensor methods are presented in Tables
Sensor method classification sensitivity (true positive rate).
Sensitivity | Left (%) | Right (%) | Foot (%) | Tongue (%) |
---|---|---|---|---|
A01T | 49.03 | 70.00 | 47.92 | 75.56 |
A02T | 38.75 | 41.11 | 61.81 | 42.22 |
A03T | 80.14 | 80.83 | 50.14 | 60.83 |
A04T | 34.86 | 37.64 | 37.92 | 46.53 |
A05T | 43.33 | 48.33 | 25.83 | 32.08 |
A06T | 35.42 | 38.33 | 50.56 | 27.64 |
A07T | 68.61 | 57.08 | 44.03 | 56.25 |
A08T | 73.47 | 61.67 | 65.28 | 69.31 |
A09T | 81.67 | 68.89 | 67.64 | 78.47 |
Mean | 56.14 | 55.99 | 50.12 | 54.32 |
Sensor method classification specificity (true negative rate).
Specificity | Left (%) | Right (%) | Foot (%) | Tongue (%) |
---|---|---|---|---|
A01T | 85.28 | 84.86 | 88.33 | 89.03 |
A02T | 79.63 | 80.83 | 87.64 | 79.86 |
A03T | 91.67 | 93.47 | 86.76 | 85.42 |
A04T | 78.94 | 77.31 | 82.59 | 80.14 |
A05T | 79.40 | 80.83 | 79.03 | 77.27 |
A06T | 80.93 | 77.96 | 79.91 | 78.52 |
A07T | 87.04 | 85.14 | 82.50 | 87.31 |
A08T | 94.21 | 87.55 | 83.98 | 90.83 |
A09T | 93.47 | 92.78 | 86.67 | 92.64 |
Mean | 85.62 | 84.53 | 84.16 | 84.56 |
Source method classification sensitivity (true positive rate).
Sensitivity | Left (%) | Right (%) | Foot (%) | Tongue (%) |
---|---|---|---|---|
A01T | 55.97 | 70.56 | 52.64 | 70.69 |
A02T | 49.72 | 38.33 | 67.08 | 50.14 |
A03T | 81.25 | 81.39 | 57.92 | 63.06 |
A04T | 49.44 | 38.61 | 48.47 | 46.53 |
A05T | 71.39 | 59.17 | 25.00 | 37.92 |
A06T | 47.36 | 44.58 | 59.03 | 24.44 |
A07T | 88.47 | 72.50 | 45.56 | 60.56 |
A08T | 77.22 | 75.14 | 60.14 | 76.11 |
A09T | 84.58 | 70.28 | 65.00 | 76.94 |
Mean | 67.27 | 61.17 | 53.43 | 56.27 |
Source method classification specificity (true negative rate).
Specificity | Left (%) | Right (%) | Foot (%) | Tongue (%) |
---|---|---|---|---|
A01T | 86.20 | 85.88 | 88.80 | 89.07 |
A02T | 76.90 | 84.07 | 88.33 | 85.79 |
A03T | 89.63 | 93.29 | 89.63 | 88.66 |
A04T | 75.46 | 84.07 | 84.40 | 83.75 |
A05T | 79.12 | 80.37 | 86.76 | 84.91 |
A06T | 78.15 | 78.75 | 82.08 | 86.16 |
A07T | 88.15 | 89.17 | 87.69 | 90.69 |
A08T | 93.70 | 90.51 | 86.99 | 91.67 |
A09T | 92.27 | 91.85 | 88.52 | 92.96 |
Mean | 84.40 | 86.44 | 87.02 | 88.18 |
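The per-class sensitivity and specificity reported in the tables above follow from the multiclass confusion matrix in a one-vs-rest fashion; a sketch:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_sens_spec(y_true, y_pred, n_classes=4):
    """One-vs-rest sensitivity (TPR) and specificity (TNR) per class,
    derived from the multiclass confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp   # missed trials of each class
    fp = cm.sum(axis=0) - tp   # trials wrongly assigned to each class
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)
```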
The ROI selection procedure was performed for all the subjects, exhibiting interesting intersubject properties. As illustrated in Figure
Histogram of the selected regions of interest (ROI), across all subjects.
Positions of the most commonly selected ROIs among subjects (right and left M1L, M1H, S1H, and CMA), displayed on the cortical model.
Noninvasive BCI systems are emerging as a promising and safe solution for rehabilitation purposes, in contrast with invasive BCIs, which are associated with health risks and ethical issues [
Although our algorithm did not reach the accuracy levels of the winning method of the BCI Competition, the ROI selection procedure yielded common ROIs across all subjects. These ROIs are anatomically and neurophysiologically related to the MI tasks, linking the method's results with neurological knowledge. Given that the motor tasks of the competition involved motor imagery of both hands, the tongue, and the feet, the consistent selection of the primary hand motor areas and the primary lip motor area (the cortical representations of the hands and face on the primary motor cortex) is very promising. Cingulate motor areas are also considered very important nodes of the sensorimotor network, having been demonstrated to drive the sensorimotor process [
Our method improved mean accuracy by 5.6% and mean Cohen’s kappa by 0.07 across all subjects with respect to the sensor method. Compared with the winner of the BCI Competition (FBCSP), however, our algorithm's performance is considerably lower, by 0.19 in Cohen's kappa. Based on the kappa value, the performance of our algorithm is considered moderate, while that of the winning implementation is considered substantial [
In this study, a generic-template three-compartment BEM head model was utilized to solve the forward problem. The forward problem solution introduces a considerable error into the source estimation, as has been explored extensively in previous studies [
There are two main shortcomings of the CSP method that were not addressed in this work. The first is that CSP filters are prone to noise and overfitting; the second is that CSP performance depends strongly on the frequency band of the input signal, and the most discriminative band varies from subject to subject [
Future work will focus on improved CSP filter extraction, feature selection, and more sophisticated ensemble models, in an effort to increase the performance of the algorithm. Since both the anatomy used for the forward model and the selected ROIs are common among all subjects, we would like to examine the potential of the algorithm for transfer learning between subjects. One study supports that transfer learning between different subjects by means of the source space can achieve higher average single-trial classification accuracy than a conventional method [
Source estimation and the application of CSP filters in the source space constitute a promising approach to increasing the classification accuracy of noninvasive BCIs. Our method demonstrated the capability to decode multiple motor imagery tasks with better accuracy than the equivalent sensor method. While our implementation is not yet superior to state-of-the-art BCI algorithms, feature selection and classifier characteristics can improve its performance. Neuroanatomical constraints and prior neurophysiological knowledge have been shown to play an important role in developing source space-based BCI algorithms. Our results indicate that the selected ROIs are common among all subjects, which warrants further investigation, possibly in the context of transfer learning between different subjects.
The BCI Competition IV dataset is available at
This study describes a novel analysis of a publicly available dataset. It does not describe new experiments on human subjects.
The authors declare that there are no conflicts of interest regarding the publication of this article.
This study was conducted as part of the development of the project CSI:Brainwave and was supported by the “Brihaye” 2018 EANS Research Grant. This study was supported by the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement no. 681120 for the SmokeFreeBrain Project.