Electroencephalogram (EEG) signals and the states of subjects are nonstationary. To track changing states effectively, an adaptive calibration framework is proposed for the brain-computer interface (BCI) with the motion-onset visual evoked potential (mVEP) as the control signal. The core of this framework is to update the training set adaptively for classifier training. The updating procedure consists of two operations: adding new samples to the training set and removing old samples from it. In the proposed framework, a support vector machine (SVM) and fuzzy C-means (fCM) clustering are combined to select reliable new samples for the training set.
A brain-computer interface (BCI) provides an alternative communication and control channel between humans and the environment or devices by noninvasive [
For a BCI system, a sufficient training dataset must be collected to train the classifier before online tasks can be implemented. This procedure may be laborious and time-consuming. To address this issue, a zero-training strategy and an automatic adapting mechanism have been explored [
The support vector machine (SVM) [
In this paper, we propose an adaptive online calibration framework, applied for the first time to an mVEP-BCI system, to calibrate the classifier so that it can track the changing states of subjects. To fulfill this goal, the framework needs to adopt the new information in the latest samples and remove the information carried by the old samples, which were recorded relatively long ago. We combine SVM and fuzzy C-means (fCM) clustering to select reliable samples from the previous blocks and then clip the expanded training set to remove the old information represented by the old samples. With these operations, an updated training set can be generated and subsequently fed into the classifier for retraining to track the subject's states. The performance of the framework was tested with the dataset from 11 subjects under the mVEP-based BCI paradigm. The results indicate the satisfactory effectiveness and efficiency of the proposed method.
The structure of this paper is as follows: The framework is introduced in Section
For most of the current BCI classifiers, training is usually implemented before the online experiment; that is, the training and test are not interactive [
Classical classification flowchart for BCI tasks.
The diagram reveals that the training set is usually fixed after the training procedure, and no new samples from the test set are adaptively updated into the training set. For an online BCI system, the training set may be collected on different days, and the experiment may last for a relatively long time. Inevitably, the patterns corresponding to the specific tasks may vary over time due to the nonstationarity and nonlinearity of EEG signals [
Considering that the individual subject’s state will vary during the experiment, it is beneficial to adapt the classifier to new data involving the varying states and to retrain it [
SVM can provide the ability to discern how reliable an assigned label of a test sample is [
The framework rests on three assumptions: (1) the variance of the subject's states will lead to classifier bias; (2) classifier calibration needs to be performed at a certain interval; and (3) the training set size cannot be too large for classifier training.
Based on these three assumptions, we proposed an adaptive framework for classifier calibration for a mVEP-based BCI system. The framework is shown in Figure
Framework for adaptive classifier calibration. Session
The “new training set generation” process is the core of this framework and determines the performance of online BCI systems. If it is removed, the framework presented in Figure
The procedure to generate the new training set.
In Figure
Considering that the subject's state will not greatly change in a relatively short time period, the calibration is performed at a certain time interval. In the current study, we adaptively updated the training set after a certain number of experiment blocks. Each block consisted of five trials, each lasting 1.5 s. With this framework, new reliable samples could be integrated into the training set, while some old samples were excluded from it. In our work, SVM is used to classify the samples based on the expanded training set, and other classifiers, such as linear discriminant analysis (LDA) [
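The update procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the inputs (per-sample labels with confidence scores from the SVM and from the fCM memberships) and the size budget are assumptions:

```python
def update_training_set(train_set, new_samples, svm_conf, fcm_conf,
                        threshold=0.7, max_size=72):
    """One calibration step: add new samples that both the SVM and the
    fCM clustering label confidently and consistently, then clip the
    oldest samples so the training set stays bounded.

    svm_conf / fcm_conf: one (label, confidence) pair per new sample.
    The signature and the fusion rule here are illustrative sketches
    of the add-then-clip idea, not the paper's exact procedure.
    """
    for sample, (svm_lab, svm_p), (fcm_lab, fcm_p) in zip(
            new_samples, svm_conf, fcm_conf):
        reliable = (svm_p >= threshold and fcm_p >= threshold
                    and svm_lab == fcm_lab)
        if reliable:
            train_set.append((sample, svm_lab))  # newest at the end
    # Clip: drop the oldest samples beyond the size budget.
    del train_set[:max(0, len(train_set) - max_size)]
    return train_set
```

The clipping step keeps the training set small enough for fast retraining while discarding the samples most likely to reflect an outdated subject state.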
Eleven subjects (three females and eight males, age 23.6 ± 1.2 years) participated in the experiment. They had either normal vision or corrected-to-normal vision. The Institution Research Ethics Board of the University of Electronic Science and Technology of China approved the experimental protocol. All the subjects read and signed an informed consent form before they participated in the experiment.
A 14-inch LCD monitor with a 1280 × 1024 resolution and 60 Hz refresh rate was used to present the visual stimulus graphical user interface (GUI) with a visual field of 30° × 19° on the screen, as shown in Figure
Graphical user interface for offline data recording in the mVEP-based BCI experiment. The number “5” in the center indicates the target button that subjects should gaze at. A red vertical line moves leftward in each of the six buttons, in random order, to form the motion-onset stimulus.
For each button, the red line appeared at the right side of the rectangle, moved leftward, and disappeared at the leftmost side. The entire process formed a brief motion-onset stimulus and took 140 ms, with a 60 ms interval between consecutive motion processes. The motion-onset stimuli appeared in the six virtual buttons in random order, and every button received its stimulus once before any was repeated. A trial thus had six successive stimulus periods corresponding to the six buttons; specifically, a trial comprised a series of six red vertical lines moving across each virtual button successively. With a 300 ms interval between two trials, each trial lasted for 1.5 s, as shown in Figure
Timing scheme of the mVEP experiment. Each block contains five trials. In each trial, the motion stimulus appears in the virtual button for 140 ms. There is a 60 ms interval between two consecutive stimuli and a 300 ms interval between two consecutive trials.
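The trial and block durations implied by this timing can be checked directly. Units are in ms; grouping the 60 ms gap with each of the six stimuli is our reading of the description:

```python
STIM_MS = 140        # motion-onset stimulus duration
GAP_MS = 60          # interval after each stimulus
ITI_MS = 300         # interval between two consecutive trials
BUTTONS = 6          # virtual buttons, one stimulus each per trial
TRIALS_PER_BLOCK = 5

trial_ms = BUTTONS * (STIM_MS + GAP_MS) + ITI_MS  # 6 * 200 + 300 = 1500
block_s = TRIALS_PER_BLOCK * trial_ms / 1000.0    # 7.5 s per block
```

One block thus takes 7.5 s, i.e., 8 selections per minute, which is the rate used for the information transfer rate below.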
In the experiment, each subject was asked to focus on the button in the center of the GUI where the random number appeared and to mentally count the occurrences of the moving stimulus in that target button. A total of 72 blocks (360 trials) were collected for each subject in two separate sessions, with a 2 min rest interval between the sessions. In the following analysis, the first session was used as the training set and the second session as the test set. For the training set, we averaged the five trials for each virtual button in each block, yielding one target stimulation sample and five standard stimulation samples, where a sample in the current work refers to the 0.5 s long EEG recording corresponding to a stimulus. One standard stimulation sample was randomly selected and paired with the target stimulation sample, so the data collected from each subject contain 36 pairs of samples constituting the training set. For the test set, we also averaged the five trials for each virtual button, resulting in one target stimulation sample and five standard stimulation samples, that is, six samples per block. mVEP recognition is thus a binary classification problem: we ran the binary classification six times per block and compared the output values to recognize the button at which the subject gazed. In this study, accuracy, defined as the ratio of correctly classified blocks to the total blocks in the test set, was used to measure the subjects' performance; the higher the recognition accuracy, the better the performance of mVEP-BCI.
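The six-fold binary classification and the block-level accuracy described above can be sketched as follows (illustrative, with hypothetical classifier scores as input):

```python
def recognize_button(scores):
    # One classifier output per virtual button (six values per block);
    # the button whose epoch scores most like a target response wins.
    return max(range(len(scores)), key=lambda i: scores[i])

def block_accuracy(predicted, actual):
    # Ratio of correctly classified blocks to total blocks.
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)
```

For example, if the binary classifier assigns the largest target-vs-standard score to button 1 of a block, that button is taken as the subject's gaze target for the block.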
By using a Symtop amplifier (Symtop Instrument, Beijing, China), eight Ag/AgCl electrodes (O3, O4, P3, P4, CP1, CP2, CP3, and CP4) from an extended 10–20 system were placed for EEG recordings. The AFz electrode was adopted as the reference. The EEG signals were sampled at 1000 Hz. Scalp-recorded EEG signals are usually contaminated by noise, and, in our work, samples with absolute amplitude above 50 μV were excluded from further analysis.
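A minimal sketch of the amplitude-based artifact rejection follows. The "50" in the text presumably refers to 50 μV, a common EEG rejection threshold, so the unit here is an assumption:

```python
def reject_artifacts(epochs, threshold=50.0):
    # Drop any epoch in which the absolute amplitude exceeds the
    # threshold (assumed to be in microvolts; the unit was lost from
    # the text). Each epoch is a sequence of amplitude samples.
    return [ep for ep in epochs
            if max(abs(sample) for sample in ep) <= threshold]
```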
This section details the performance evaluation of the proposed approach under various conditions based on the accuracy and information transfer rate. The accuracy is defined as the ratio of the number of correctly recognized targets to the number of targets overall. Besides accuracy, the corresponding information transfer rate (ITR) is another standard criterion to measure the BCI performance. Generally, ITR is defined as
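The defining equation did not survive extraction; the standard Wolpaw-style ITR, which reproduces the values reported in the tables below (e.g., 94.4% accuracy gives 17.1 with six targets and 7.5 s blocks), can be sketched as:

```python
import math

def itr_bits_per_min(p, n=6, t_sec=7.5):
    # Wolpaw-style ITR: bits per selection scaled to bits per minute.
    # n: number of targets (six virtual buttons), t_sec: time per
    # selection (one block of five 1.5 s trials), p: accuracy.
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * (60.0 / t_sec)
```

With 7.5 s blocks, the system makes 8 selections per minute, so an accuracy of 94.4% yields about 17.1 bits/min, matching the tables.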
As the subject’s state may change during certain intervals, in this section, the effect of the calibration interval on the classifier performance is explored. Specifically, we study the performance of the classifier when it is calibrated with different numbers of blocks. Table
Performance of the fusion calibration (SVM and fCM) when the classifier is calibrated with different numbers of blocks.
Subjects | Adaptive calibration by SVM and fCM (accuracy (%)/ITR): every 4 blocks | every 6 blocks | every 9 blocks | SVM (no adaptive calibration)
---|---|---|---|---
S1 | 86.1/13.4 | 81.5/11.7 | 83.3/12.4 | 83.3/12.4 |
S2 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 |
S3 | 69.4/7.9 | 72.2/8.7 | 72.2/8.7 | 66.7/7.2 |
S4 | 94.4/17.1 | 97.2/18.7 | 94.4/17.1 | 91.7/15.8 |
S5 | 77.8/10.4 | 80.6/11.4 | 77.8/10.4 | 75/9.5 |
S6 | 91.7/15.8 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 |
S7 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 |
S8 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 | 88.9/14.6 |
S9 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 |
S10 | 83.3/12.4 | 77.8/10.4 | 80.6/11.4 | 77.8/10.4 |
S11 | 91.7/15.8 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 |
| ||||
Mean ± std | | | | |
Performance of single SVM calibration when the classifier is calibrated with different numbers of blocks.
Subjects | Adaptive calibration by SVM (accuracy (%)/ITR): every 4 blocks | every 6 blocks | every 9 blocks | SVM (no adaptive calibration)
---|---|---|---|---
S1 | 80.6/11.4 | 80.6/11.4 | 83.3/12.4 | 83.3/12.4 |
S2 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 |
S3 | 63.9/6.4 | 66.7/7.2 | 63.9/6.4 | 66.7/7.2 |
S4 | 88.9/14.6 | 91.7/15.8 | 88.9/14.6 | 91.7/15.8 |
S5 | 75/9.5 | 72.2/8.7 | 75/9.5 | 75/9.5 |
S6 | 88.9/14.6 | 86.1/13.4 | 88.9/14.6 | 88.9/14.6 |
S7 | 91.7/15.8 | 88.9/14.6 | 91.7/15.8 | 91.7/15.8 |
S8 | 88.9/14.6 | 91.7/15.8 | 88.9/14.6 | 88.9/14.6 |
S9 | 91.7/15.8 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 |
S10 | 83.3/12.4 | 80.6/11.4 | 75/9.5 | 77.8/10.4 |
S11 | 88.9/14.6 | 91.7/15.8 | 88.9/14.6 | 91.7/15.8 |
| ||||
Mean ± std | | | | |
Performance of single fCM calibration when the classifier is calibrated with different numbers of blocks.
Subjects | Adaptive calibration by fCM (accuracy (%)/ITR): every 4 blocks | every 6 blocks | every 9 blocks | SVM (no adaptive calibration)
---|---|---|---|---
S1 | 77.8/10.4 | 83.3/12.4 | 77.8/10.4 | 83.3/12.4 |
S2 | 91.7/15.8 | 91.7/15.8 | 88.9/14.6 | 91.7/15.8 |
S3 | 66.7/7.2 | 69.4/7.9 | 66.7/7.2 | 66.7/7.2 |
S4 | 91.7/15.8 | 88.9/14.6 | 88.9/14.6 | 91.7/15.8 |
S5 | 77.8/10.4 | 75/9.5 | 75/9.5 | 75/9.5 |
S6 | 94.4/17.1 | 91.7/15.8 | 88.9/14.6 | 88.9/14.6 |
S7 | 86.1/13.4 | 88.9/14.6 | 86.1/13.4 | 91.7/15.8 |
S8 | 88.9/14.6 | 86.1/13.4 | 88.9/14.6 | 88.9/14.6 |
S9 | 91.7/15.8 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 |
S10 | 77.8/10.4 | 75/9.5 | 77.8/10.4 | 77.8/10.4 |
S11 | 94.4/17.1 | 91.7/15.8 | 94.4/17.1 | 91.7/15.8 |
| ||||
Mean ± std | | | | |
In this subsection, the influence of the threshold for reliable sample selection on the calibration performance is explored. Five values, that is, 0.6, 0.65, 0.7, 0.75, and 0.8, were tested. The calibration was performed every four blocks. Table
Performance of the fusion calibration (SVM and fCM) when the threshold for reliable sample selection takes different values.
Subjects | Adaptive calibration by SVM and fCM (accuracy (%)/ITR): threshold 0.6 | 0.65 | 0.7 | 0.75 | 0.8 | SVM (no adaptive calibration)
---|---|---|---|---|---|---
S1 | 83.3/12.4 | 83.3/12.4 | 86.1/13.4 | 86.1/13.4 | 83.3/12.4 | 83.3/12.4 |
S2 | 94.4/17.1 | 91.7/15.8 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 |
S3 | 72.2/8.7 | 69.4/7.9 | 69.4/7.9 | 69.4/7.9 | 66.7/7.2 | 66.7/7.2 |
S4 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 |
S5 | 80.6/11.4 | 80.6/11.4 | 80.6/11.4 | 77.8/10.4 | 77.8/10.4 | 75/9.5 |
S6 | 88.9/14.6 | 88.9/14.6 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 88.9/14.6 |
S7 | 88.9/14.6 | 94.4/17.1 | 91.7/15.8 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 |
S8 | 91.7/15.8 | 91.7/15.8 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 | 88.9/14.6 |
S9 | 97.2/18.7 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 |
S10 | 80.6/11.4 | 83.3/12.4 | 80.6/11.4 | 83.3/12.4 | 80.6/11.4 | 77.8/10.4 |
S11 | 94.4/17.1 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 |
| ||||||
Mean ± std | | | | | | |
Performance of single SVM calibration when the threshold for reliable sample selection takes different values.
Subjects | Adaptive calibration by SVM (accuracy (%)/ITR): threshold 0.6 | 0.65 | 0.7 | 0.75 | 0.8 | SVM (no adaptive calibration)
---|---|---|---|---|---|---
S1 | 80.6/11.4 | 83.3/12.4 | 80.6/11.4 | 80.6/11.4 | 80.6/11.4 | 83.3/12.4 |
S2 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 |
S3 | 63.9/6.4 | 66.7/7.2 | 66.7/7.2 | 63.9/6.4 | 61.1/5.7 | 66.7/7.2 |
S4 | 86.1/13.4 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 | 91.7/15.8 | 91.7/15.8 |
S5 | 77.8/10.4 | 80.6/11.4 | 75/9.5 | 75/9.5 | 77.8/10.4 | 75/9.5 |
S6 | 88.9/14.6 | 86.1/13.4 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 |
S7 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 |
S8 | 86.1/13.4 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 |
S9 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 94.4/17.1 |
S10 | 77.8/10.4 | 80.6/11.4 | 83.3/12.4 | 83.3/12.4 | 80.6/11.4 | 77.8/10.4 |
S11 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 88.9/14.6 | 88.9/14.6 | 91.7/15.8 |
| ||||||
Mean ± std | | | | | | |
Performance of single fCM calibration when the threshold for reliable sample selection takes different values.
Subjects | Adaptive calibration by fCM (accuracy (%)/ITR): threshold 0.6 | 0.65 | 0.7 | 0.75 | 0.8 | SVM (no adaptive calibration)
---|---|---|---|---|---|---
S1 | 83.3/12.4 | 80.6/11.4 | 77.8/10.4 | 77.8/10.4 | 80.6/11.4 | 83.3/12.4 |
S2 | 91.7/15.8 | 91.7/15.8 | 94.4/17.1 | 91.7/15.8 | 94.4/17.1 | 91.7/15.8 |
S3 | 63.9/6.4 | 66.7/7.2 | 66.7/7.2 | 66.7/7.2 | 72.2/8.7 | 66.7/7.2 |
S4 | 91.7/15.8 | 91.7/15.8 | 88.9/14.6 | 91.7/15.8 | 88.9/14.6 | 91.7/15.8 |
S5 | 77.8/10.4 | 75/9.5 | 77.8/10.4 | 77.8/10.4 | 75/9.5 | 75/9.5 |
S6 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 94.4/17.1 | 86.1/13.4 | 88.9/14.6 |
S7 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 | 86.1/13.4 | 86.1/13.4 | 91.7/15.8 |
S8 | 88.9/14.6 | 86.1/13.4 | 86.1/13.4 | 88.9/14.6 | 88.9/14.6 | 88.9/14.6 |
S9 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 91.7/15.8 | 94.4/17.1 |
S10 | 77.8/10.4 | 80.6/11.4 | 77.8/10.4 | 77.8/10.4 | 77.8/10.4 | 77.8/10.4 |
S11 | 91.7/15.8 | 91.7/15.8 | 94.4/17.1 | 94.4/17.1 | 91.7/15.8 | 91.7/15.8 |
| ||||||
Mean ± std | | | | | | |
The calibration of the classifier is an open issue for BCI online systems, and the application of the information contained in the new samples is one feasible solution to this issue [
As shown in Table
As shown in Table
The reliability of the selected sample is crucial for the classifier calibration. The performance improvement of the calibration approach is mainly due to the use of the information in the new samples to retrain the classifier. In this framework, the combination of two different approaches can reflect the different aspects of samples to mine the information hidden in the new samples. As shown in Table
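Fuzzy C-means assigns each sample a graded membership in every cluster rather than a hard label, which is the complementary view of sample reliability mentioned above. A minimal one-dimensional sketch (not the authors' implementation) is:

```python
import random

def fcm(points, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal 1-D fuzzy C-means. Returns (memberships, centers):
    memberships[i][k] is the degree to which point i belongs to
    cluster k (each row sums to 1); m is the fuzzifier."""
    rng = random.Random(seed)
    n = len(points)
    # Random initial memberships, normalized per point.
    u = []
    for _ in range(n):
        row = [rng.random() + 1e-6 for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    centers = [0.0] * c
    for _ in range(n_iter):
        # Centers: membership-weighted means of the points.
        for k in range(c):
            w = [u[i][k] ** m for i in range(n)]
            centers[k] = sum(wi * x for wi, x in zip(w, points)) / sum(w)
        # Memberships: inverse-distance ratios to all centers.
        for i in range(n):
            d = [abs(points[i] - centers[k]) + 1e-9 for k in range(c)]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(c))
    return u, centers
```

A sample whose membership is near 1 for one cluster is a confident assignment, while memberships near 0.5 flag ambiguous samples; this graded view is presumably what the fusion with the SVM confidence exploits when selecting reliable samples.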
In summary, Tables
Number of samples updated and ratio of correctly recognized samples by three methods.
Session | Adaptive calibration by SVM: A | B | C | Adaptive calibration by fCM: A | B | C | Fusion adaptive calibration: A | B | C
---|---|---|---|---|---|---|---|---|---
2 | 13 | 15 | 0.867 | 4 | 6 | 0.667 | 3 | 4 | 0.750 |
3 | 9 | 14 | 0.643 | 3 | 5 | 0.600 | 4 | 6 | 0.667 |
4 | 10 | 13 | 0.769 | 2 | 4 | 0.500 | 3 | 4 | 0.750 |
5 | 11 | 14 | 0.786 | 3 | 5 | 0.600 | 4 | 5 | 0.800 |
6 | 13 | 15 | 0.867 | 5 | 6 | 0.833 | 4 | 4 | 1.000 |
7 | 10 | 13 | 0.769 | 4 | 5 | 0.800 | 3 | 4 | 0.750 |
8 | 9 | 11 | 0.818 | 6 | 6 | 1.000 | 3 | 3 | 1.000 |
9 | 12 | 14 | 0.857 | 7 | 7 | 1.000 | 3 | 3 | 1.000 |
| |||||||||
Sum | | | | | | | | | |
A and B denote the number of correctly labeled samples and the total number of samples updated into the training set, respectively, and C denotes the ratio of A to B.
The identification accuracy of Subject 1 in each session among the three calibration methods.
From Table
After new samples were added to the training set, the clip technique was used to remove the old samples recorded long before the current blocks. This technique benefits the online BCI system in two ways. First, removing the old samples helps track the subject's state, because those samples may represent a stage different from the subject's current one, and using them for training could distort the classifier. Second, the online system requires a training set that is not too large for effective training [
The results in this work come from offline analysis of the mVEP-BCI data recorded in our lab. We will transplant this framework to our online BCI system in the future. Moreover, SVM is used in the current version; other classifiers, such as LDA [
Overall, the above results demonstrate that the proposed adaptive calibration framework, applied here for the first time to the mVEP-BCI system, can improve BCI classifier performance. The core of the proposed framework is adaptively updating the training set and recalibrating the classifier. One part of the update adds novel information reflecting the subject's current state to the training set, and the other removes old information from it. By merging the information in the new samples with the training set, the classifier can track changes in the subject's states. The feasibility and effectiveness of the framework were verified on real offline EEG data. Accordingly, the proposed framework is a promising methodology for adaptively improving the mVEP-based BCI system, and it could be generalized to other BCI modalities.
The authors declare that there are no conflicts of interest regarding the publication of this paper.
This work was supported in part by the National Natural Science Foundation of China (nos. 61522105 and 81330032), the Open Project Foundation of Information Technology Research Base of Civil Aviation Administration of China (no. CAAC-ITRB-201607), the 863 Project (2012AA011601), Chengdu’s Huimin Projects of Science and Technology in 2013, and Longshan Academic Talent Research Supporting Program of SWUST (no. 17LZX692).