A Motor Imagery Classification Experiment Based on EEG Signals

This article is shared by a user of the “Kiwi sits in the cloud” community.

Project background

The essence of the brain-computer interface (BCI) is to establish a direct communication pathway between the brain and external devices, which is why the BCI is also regarded as a pinnacle of artificial intelligence research. A BCI interprets a person's intention through computer information-processing technology and transforms that intention into external control commands, so that the brain can directly control the outside world. Current applications of the brain-computer interface mainly include the following [1]:

① Replace lost central nervous system output

② Restore lost central nervous system output

③ Supplement normal central nervous system output

④ Improve normal central nervous system output

⑤ Serve as a research tool for studying the function of the central nervous system

The EEG (electroencephalogram) belongs to the non-invasive branch of brain-computer interface technology. As a special and complex bioelectrical signal, it reflects brain function, and detecting changes in these potentials is very important for the study of brain function. By efficiently extracting the information contained in EEG signals, we can understand the functional activity of the brain more deeply [2]. In recent years, the range of EEG research tasks has continued to grow, mainly including: motor imagery data, emotion recognition data, error-related potentials (ErrP), visual evoked potentials (VEP), event-related potentials (ERP), slow cortical potentials (SCP), music and EEG, depression-related EEG, clinical EEG, and so on.

This experiment studies the motor imagery task (MI-EEG). The principle is that when people imagine moving their limbs (or muscles) without actually performing the movement, the corresponding areas of the brain are still activated, for example for the left hand, right hand, feet, or tongue. By analyzing the EEG signal and detecting which brain regions are activated, the user's intention can be determined, thereby achieving direct communication and control between the human brain and external devices.

The data set used in this experiment is the EEG Motor Movement/Imagery Dataset (EEGMMIDB), containing 64-channel EEG recordings. Each subject performed 14 experimental runs, including baseline runs (eyes open, eyes closed) and repetitions of the following four tasks:

  1. A target appears on either the left or the right side of the screen. The subject opens and closes the corresponding fist until the target disappears, then relaxes.

  2. A target appears on either the left or the right side of the screen. The subject imagines opening and closing the corresponding fist until the target disappears, then relaxes.

  3. A target appears on either the top or the bottom of the screen. The subject opens and closes either both fists (if the target is on top) or both feet (if the target is on the bottom) until the target disappears, then relaxes.

  4. A target appears on either the top or the bottom of the screen. The subject imagines opening and closing either both fists (if the target is on top) or both feet (if the target is on the bottom) until the target disappears, then relaxes.

In this experiment we selected the experimental data of task 2. Each trial lasts 4 seconds, the sampling rate is 160 Hz, and 64 channels of data are collected. The trial labels are T0: rest, T1: imagined left-hand opening/clenching, T2: imagined right-hand opening/clenching.
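As a rough sketch of the trial dimensions described above (the data loading itself is omitted; the array here is synthetic and the variable names are illustrative, not taken from the project code):

```python
import numpy as np

FS = 160            # sampling rate (Hz), per the dataset description
TRIAL_SECONDS = 4   # each trial lasts 4 seconds
N_CHANNELS = 64     # 64-channel EEG

n_samples = FS * TRIAL_SECONDS   # 640 time points per trial

# A fake continuous recording: 10 trials back to back, 64 channels.
recording = np.random.randn(10 * n_samples, N_CHANNELS)

# Cut the recording into fixed-length trials of shape (640, 64),
# i.e. the 640 x 64 data points per trial mentioned in the article.
trials = recording.reshape(-1, n_samples, N_CHANNELS)
print(trials.shape)  # (10, 640, 64)
```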


Objectives

Preprocess the raw EEG signal to remove noise and interference

Perform feature extraction on the preprocessed signal to obtain useful information

Build a classifier for the three classes

Evaluate and compare the performance of the classifier, and assess the contribution of feature extraction


Experimental procedure

To correctly recognize EEG signals, signal processing should include three parts: preprocessing, feature selection and extraction, and feature classification. The preprocessing stage applies spatial and temporal filters to the raw signal to remove noise and artifacts. The feature-extraction stage abstracts, from the raw EEG signal, feature vectors that can accurately distinguish different mental states. The classification stage applies decision rules to the extracted feature vectors to obtain the best classification result. Finally, we evaluate the classifier's performance and visualize the extracted features. The experimental workflow is shown in Figure 3.1.
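The article does not state the exact filter settings used in preprocessing. As a hedged illustration of the temporal-filtering step only, a zero-phase band-pass filter could look like this with SciPy (the 8–30 Hz band is a common choice for motor imagery and is an assumption here, not taken from the article):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 160  # sampling rate of the dataset (Hz)

def bandpass(eeg, low=8.0, high=30.0, fs=FS, order=4):
    """Zero-phase band-pass filter applied along the time axis.

    eeg: array of shape (n_samples, n_channels).
    The 8-30 Hz band is an illustrative assumption.
    """
    # Normalize cutoff frequencies to the Nyquist frequency.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    # filtfilt runs the filter forward and backward, so no phase shift.
    return filtfilt(b, a, eeg, axis=0)

# Example: filter one 4-second, 64-channel trial.
trial = np.random.randn(640, 64)
filtered = bandpass(trial)
print(filtered.shape)  # (640, 64)
```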

[Figure 3.1: experimental workflow]

Classifier design

A review of the literature [3] shows that deep-learning approaches to MI-EEG classification have evolved as shown in Figure 3.1, mainly including RNN [5], CNN [4], and GCN [6]. In this experiment we developed a CNN-based classifier. CNN has several advantages for the MI-EEG classification problem. First, it can omit the explicit feature-extraction step and take the preprocessed data directly as input. Second, CNN can learn from large amounts of data which features are important, and it excels at large-scale data processing; MI-EEG data are indeed massive, with 640 × 64 data points in a single trial. The deep-learning code for this EEG experiment is available on GitHub.

The CNN classifier developed in this article is based on [4], consisting of six convolutional layers, two max-pooling layers, and two fully connected layers. The data set has size [64 × 60, 560] and is divided into training and test sets at a ratio of 7:3. Each 560-dimensional sample is reshaped into a 28 × 20 image as input, so the input size is 28 × 20 × 1, and the output is the predicted class. The batch size is 64. The Adam optimizer is used with a learning rate of 1 × 10⁻⁵. The detailed network structure is shown in Figure 3.2.
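As a minimal sketch of a network with this shape in Keras: the layer counts, input size, optimizer, and learning rate follow the description above, but the filter counts and kernel sizes are illustrative assumptions, not the exact architecture of [4] or of the linked repository.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(28, 20, 1), n_classes=3):
    """Six conv layers, two max-pooling layers, two dense layers.

    Filter counts and kernel sizes are assumptions for illustration.
    """
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),                      # 28x20 -> 14x10
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),                      # 14x10 -> 7x5
        layers.Flatten(),
        layers.Dense(256, activation="relu"),        # fully connected 1
        layers.Dense(n_classes, activation="softmax"),  # fully connected 2
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_model()
print(model.output_shape)  # (None, 3)
```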

[Figure 3.2: network structure of the CNN classifier]

After 500 epochs of training, the accuracy on the training set reached 98.96%, and the three-class classification accuracy on the test set was 93.4%; the model performs well. Figure 3.4 shows that the loss function converges. According to [4], the three-class classification accuracy over 105 subjects lies between 93% and 94%, which is consistent with the result of this experiment and supports the validity of the algorithm.

[Figure 3.4: training loss and accuracy curves]

Performance evaluation

This experiment uses four indices, Accuracy, Precision, Recall, and F1-score, to evaluate the performance of the model. Accuracy is the proportion of all predictions, positive or negative, that are correct; a higher accuracy means a better overall model. Precision is the proportion of samples predicted as positive that are actually positive; a higher precision means the model is more reliable on the positive class. Recall is the proportion of actual positive samples that are correctly predicted as positive. The F1-score is the harmonic mean of precision and recall; the higher, the better. In general, precision and recall pull against each other: when one is high the other tends to be low, and if both are low the model has a real problem. The F1-score is therefore introduced as a composite metric that balances the influence of precision and recall and evaluates the classifier comprehensively. The formulas below are defined in terms of:

① True Positive (TP)

② False Positive (FP)

③ True Negative (TN)

④ False Negative (FN)

Accuracy = (TP + TN) / (TP + FP + TN + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1 = 2 × Precision × Recall / (Precision + Recall)
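The four metrics above can be computed directly from the confusion-matrix counts. A minimal sketch for a single class (one-vs-rest); the example counts are made up for illustration:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example: 80 true positives, 10 false positives,
# 95 true negatives, 15 false negatives.
acc, prec, rec, f1 = classification_metrics(tp=80, fp=10, tn=95, fn=15)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.875 0.889 0.842 0.865
```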

The complete source code is available at the GitHub address below.

https://github.com/siyi-wind/machine-learning-course-projects/tree/master/EEG-MI%20classification

References

[1] Brunner C, Birbaumer N, Blankertz B, et al. BNCI Horizon 2020: towards a roadmap for the BCI community[J]. Brain-Computer Interfaces, 2015, 2(1): 1-10.

[2] Li Yingye, Fan Feiyan, Chen Shengshi. Development of EEG analysis in cognitive science research[J]. 2006, 25(3): 321-324.

[3] Padfield N, Zabalza J, Zhao H, et al. EEG-based brain-computer interfaces using motor imagery: techniques and challenges[J]. Sensors, 2019, 19(6).

[4] He Y, Zhou L, Jia S, et al. A novel approach to decoding four-category motor imagery tasks for EEG via Scout ESI and CNN[J]. Journal of Neural Engineering, 2019, 17(1).

[5] Hou Y, Jia S, Zhang S, et al. Deep feature mining via attention-based BiLSTM-GCN for human motor imagery recognition[J]. 2020.

[6] Defferrard M, Bresson X, Vandergheynst P. Convolutional neural networks on graphs with fast localized spectral filtering[C]. Advances in Neural Information Processing Systems, 2016.

[7] Rajesh P. N. Rao. Brain-Computer Interfacing: An Introduction[M]. Beijing: China Machine Press, 2016.

[8] Wang Hongtao, Zhou Heliang, Li Daqiang, et al. Design and implementation of an online algorithm based on left- and right-hand motor imagery[J]. Journal of Data Acquisition and Processing, 2013, 28(6): 829-833.



