Problem Description and Data Collection

In this sub-project, we were interested in the correlation between EEG signals and different actions applied to the same object. We focused on distinguishing the actions in single trials based on the EEG signal. Further, we wanted to see whether we could predict the action the subject is conducting as early as possible, which could help improve the interaction between human and machine.

In order to characterize how different actions are represented in EEG, we examined the EEG signal under two conditions: 1. the subject actually performs the actions; and 2. the subject watches videos of the actions and tries to recognize them. We were interested in the similarities and differences between the active actor and the passive observer. Thus, we collected EEG data from 1 subject performing three different actions with a sponge (flip, squeeze, and wash) and from 1 subject watching videos of different people performing these three actions.

In the EEG experiment with the active actor, the subject was asked to perform one of the three actions shown below after hearing an auditory instruction. For the EEG recording of the observer, the subject was asked to watch videos of the different actions and try to recognize them, similar to the psychophysics experiment at https://neuromorphs.net/nm/wiki/2015/Results/mfa/psychophysics.

Figure 1. Example of the three actions: a. Flip, b. Squeeze, c. Wash

Furthermore, in order to relate the EEG signal to hand movement, we recorded the EEG signal and hand movement (using a CyberGlove Systems data glove) simultaneously from 1 subject performing these actions.

Figure 2. EEG experiment with glove

An Easycap from BrainVision was used to collect EEG data. Data from 64 electrodes were sampled at 500 Hz.

The Method

We focused on the EEG data from the first actor experiment (without the glove). The average time-domain signals for the three actions are shown in Fig. 3.


Figure 3. Average EEG signal

  1. Classification

First, to serve as a baseline, we classified the actions based on the whole single-trial EEG signal. Seven frequency-domain features (power in 7 bands: 1-8 Hz, 8-12 Hz, 12-30 Hz, 30-50 Hz, 50-70 Hz, 70-90 Hz, and 90-100 Hz) were extracted for each electrode in each single trial. A multi-class Support Vector Machine (SVM) with a linear kernel was used to classify the actions by majority voting over 3 one-vs-one SVMs.
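The feature extraction and classifier above can be sketched as follows. This is a minimal illustration, not the original analysis code: the PSD estimator (Welch), the `nperseg` setting, and all variable names are assumptions, and the demo data is synthetic.

```python
# Sketch of band-power features per electrode + linear multi-class SVM.
# Assumed details: Welch PSD with nperseg=256; names are illustrative.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 500  # sampling rate (Hz)
BANDS = [(1, 8), (8, 12), (12, 30), (30, 50), (50, 70), (70, 90), (90, 100)]

def band_power_features(trials):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * 7)."""
    feats = []
    for trial in trials:
        f, psd = welch(trial, fs=FS, nperseg=256, axis=-1)  # psd: (n_channels, n_freqs)
        bp = [psd[:, (f >= lo) & (f < hi)].sum(axis=-1) for lo, hi in BANDS]
        feats.append(np.stack(bp, axis=-1).ravel())         # 7 band powers per channel
    return np.array(feats)

# Synthetic demo: 30 trials, 64 channels, 1 s each
rng = np.random.default_rng(0)
X = band_power_features(rng.standard_normal((30, 64, 500)))
y = rng.integers(0, 3, size=30)  # labels for flip / squeeze / wash

# scikit-learn's SVC handles multi-class problems by one-vs-one voting,
# matching the 3 pairwise SVMs described in the text.
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:5]))
```

With 64 electrodes and 7 bands, each trial yields a 448-dimensional feature vector.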


Figure 4. Average EEG feature for three actions

2. Prediction

A Recurrent Neural Network (RNN) was used to classify (predict) the actions in real time. In this RNN, the input was first projected into an 80-dimensional feature space (projection layer), and the projection layer was then fed to the recurrent layer, which also contains 80 neurons (for details of the RNN, please visit https://neuromorphs.net/nm/wiki/2015/Results/mfa/rnn).
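A minimal numpy sketch of this projection-plus-recurrent architecture is given below. The weights here are random placeholders (the real network was trained), and the tanh nonlinearities and softmax readout are assumptions not stated in the text.

```python
# Forward pass of an 80-unit projection layer feeding an 80-unit recurrent
# layer with a 3-way softmax readout, one prediction per time frame.
# Weights are untrained placeholders; nonlinearities are assumed.
import numpy as np

rng = np.random.default_rng(0)
N_FEAT, N_PROJ, N_REC, N_CLASS = 64 * 7, 80, 80, 3

W_proj = rng.standard_normal((N_FEAT, N_PROJ)) * 0.01
W_in = rng.standard_normal((N_PROJ, N_REC)) * 0.01
W_rec = rng.standard_normal((N_REC, N_REC)) * 0.01
W_out = rng.standard_normal((N_REC, N_CLASS)) * 0.01

def rnn_forward(frames):
    """frames: (n_frames, N_FEAT) -> per-frame class probabilities."""
    h = np.zeros(N_REC)
    probs = []
    for x in frames:
        p = np.tanh(x @ W_proj)            # projection layer
        h = np.tanh(p @ W_in + h @ W_rec)  # recurrent layer
        logits = h @ W_out
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())          # softmax over the 3 actions
    return np.array(probs)

probs = rnn_forward(rng.standard_normal((20, N_FEAT)))  # 20 time frames
print(probs.shape)  # one 3-way distribution per frame
```

Because the hidden state carries over between frames, the network can refine its prediction as more of the trial is observed, which is what enables early, real-time classification.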

A spectrogram was applied to extract features over time. A window of 150 samples (0.3 s) with 66.7% overlap (0.1 s per frame) was used. Features were extracted as the power in the same 7 bands used for the SVM.
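The framing arithmetic above (150-sample window, 66.7% overlap at 500 Hz gives a 50-sample, i.e. 0.1 s, hop) can be sketched as below for a single channel. The plain-FFT power estimate and the function name are illustrative assumptions.

```python
# Per-frame band powers from a sliding 150-sample window with 50-sample hop.
# 150 * (1 - 2/3) = 50 samples = 0.1 s per frame at 500 Hz.
import numpy as np

FS = 500
WIN, HOP = 150, 50  # 0.3 s window, 0.1 s frame step
BANDS = [(1, 8), (8, 12), (12, 30), (30, 50), (50, 70), (70, 90), (90, 100)]

def frame_band_powers(signal):
    """signal: (n_samples,) -> (n_frames, 7) band powers per frame."""
    freqs = np.fft.rfftfreq(WIN, d=1.0 / FS)
    feats = []
    for start in range(0, len(signal) - WIN + 1, HOP):
        spec = np.abs(np.fft.rfft(signal[start:start + WIN])) ** 2
        feats.append([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS])
    return np.array(feats)

x = np.random.default_rng(0).standard_normal(2500)  # 5 s of one channel
F = frame_band_powers(x)
print(F.shape)
```

A 5 s recording yields (2500 - 150) / 50 + 1 = 48 frames, each with 7 band-power features; stacking across all 64 channels gives the per-frame RNN input.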


Figure 5. Average RNN feature (frequency versus time) at electrode FC3


The Results

1. SVM classifier

10-fold cross-validation was applied. Features were first normalized by the mean and standard deviation computed from the training set. The result is shown in Fig. 6.
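This evaluation protocol can be sketched with scikit-learn, where wrapping the scaler and the SVM in a pipeline guarantees that the normalization statistics come only from the training folds of each split. The data here is synthetic and the trial counts are illustrative, not the actual experiment sizes.

```python
# 10-fold CV with train-fold-only normalization, then a confusion matrix.
# Synthetic balanced data; shapes and counts are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 448))   # 60 trials, 64 channels x 7 bands
y = np.repeat([0, 1, 2], 20)         # flip / squeeze / wash, 20 trials each

# StandardScaler is re-fit on the training folds inside each CV split,
# so test-fold data never leaks into the normalization.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
y_pred = cross_val_predict(model, X, y, cv=10)
print(confusion_matrix(y, y_pred))   # rows: true action, cols: predicted
```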


Figure 6. Confusion matrix of the SVM classifier. Numbers in red represent 95% confidence intervals.

2. RNN predictor

10-fold cross-validation was applied. Features were first normalized by the mean and standard deviation computed from the training set. The result is shown in Fig. 7.


Figure 7. RNN result (accuracy over time). Error bars represent 95% confidence interval.