Multidimensional EEG emotion recognition using SKM and Transformer
【Abstract (Chinese)】 To address two shortcomings of current deep-learning approaches to EEG emotion recognition, namely insufficient analysis of the differentiated information carried by EEG signals of different dimensionalities and limited feature-extraction capability, a dual-path parallel network with multi-dimensional input signals is proposed. SKM (an improved SK-MiniXception network) is introduced, which preserves the model's representational power while using an attention mechanism to select the channels of the four-dimensional EEG signal that express emotion most strongly. A BAGRU-BLS module is proposed to extract local temporal features and optimize the model output, and an improved Transformer network extracts global time-frequency information from the two-dimensional EEG signal. Finally, an adaptive feature-fusion module is proposed to fuse the features for the emotion-classification and major-depression-detection experiments. Experimental results show that the parallel network achieves a four-class accuracy of 96.13% on the valence-arousal dimensions of the public DEAP dataset and an accuracy of 97.51% in the major-depression-detection experiment on the MODMA dataset, a clear improvement over classical convolutional and recurrent neural network models.
【Abstract】 Although significant progress has been made in current research on deep-learning-based EEG emotion recognition, many challenges remain, the most prominent being insufficient analysis of the differentiated information carried by EEG signals of different dimensionalities and limited feature-extraction capability. To solve these problems, this paper proposes a dual-path parallel neural network architecture with multi-dimensional input signals. First, the multi-channel EEG signals are converted into a series of time-frequency features and reconstructed into multi-dimensional feature matrices to capture the multi-dimensional information in the EEG signals more comprehensively. These feature matrices are then fed into two independent processing paths for efficient parallel computation. In the first path, SKM (an optimized SK-MiniXception network) is introduced, which retains the strong feature-extraction ability of the traditional convolutional structure while incorporating an attention mechanism. This design lets the model focus, at each stage of training, on the EEG channels that contribute most to emotional expression, improving recognition accuracy by assigning higher weights to those channels. Meanwhile, the BAGRU-BLS module is proposed: it exploits the strength of the bidirectional gated recurrent unit in processing time-series data, uses an attention mechanism to strengthen the weights of time segments that strongly express emotion, and optimizes local temporal feature extraction by combining with a broad learning module. This module not only captures the dynamic change characteristics of the four-dimensional EEG signal but also reduces the risk of overfitting during training.
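The paper's abstract does not give implementation details of the SK-style channel attention, but the idea it describes, weighting EEG channels by their contribution to emotional expression via branch selection, can be sketched minimally. Everything below (the two fixed moving-average "branches", the random fully-connected weights, the reduction ratio) is an illustrative assumption, not the authors' actual SKM design:

```python
import numpy as np

def sk_channel_attention(x, reduction=4):
    """Selective-kernel-style channel attention sketch.

    x: (channels, time) EEG feature matrix. Two branches with different
    temporal receptive fields are fused, and a softmax over per-channel
    descriptors decides how much each branch contributes per channel.
    """
    C, T = x.shape
    # Two "kernel branches": moving averages with different window sizes
    b1 = np.apply_along_axis(lambda r: np.convolve(r, np.ones(3) / 3, mode="same"), 1, x)
    b2 = np.apply_along_axis(lambda r: np.convolve(r, np.ones(5) / 5, mode="same"), 1, x)
    s = (b1 + b2).mean(axis=1)              # global descriptor per channel
    # Toy squeeze/excite layers with fixed random weights (stand-ins for learned fc layers)
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((C // reduction, C))
    W2a = rng.standard_normal((C, C // reduction))
    W2b = rng.standard_normal((C, C // reduction))
    z = np.maximum(W1 @ s, 0)               # compressed descriptor (ReLU)
    logits = np.stack([W2a @ z, W2b @ z])   # one logit set per branch
    attn = np.exp(logits) / np.exp(logits).sum(axis=0)  # softmax across branches
    # Channels where one branch "wins" are dominated by that branch's features
    return attn[0][:, None] * b1 + attn[1][:, None] * b2
```

In a trained network the fully-connected weights would be learned, so the softmax would assign higher effective weight to emotion-relevant channels rather than random ones.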
In the second path, to further mine the global temporal information of the two-dimensional EEG signal, the feature matrix processed by a one-dimensional convolutional layer is fed into a Transformer network. The Transformer is known for its strong global context modeling ability: it extracts more comprehensive and coherent time-frequency information across long-range dependencies, avoiding the loss of temporal information that may occur during the dimensional transformation of the feature matrix. Finally, to integrate the information from the two paths, an adaptive feature-fusion module is designed. This module evaluates the importance of each feature and reduces redundant information in the model output through nonlinear combination, making the final emotion-classification and depression-detection results more accurate and reliable. Experimental results show that the proposed dual-path parallel network delivers excellent performance on multiple datasets. In the emotion-classification experiment on the public DEAP dataset, the model achieves an accuracy of 96.13% in the four-class task on the valence-arousal dimensions; in the major-depression-detection experiment on the MODMA dataset, an accuracy of 97.51% is obtained. These results are significantly better than those of traditional convolutional and recurrent neural network models, which fully verifies the effectiveness of the proposed model for EEG emotion recognition.
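The abstract describes adaptive feature fusion only at a high level (importance weighting of each path's features). One common realization, shown here purely as a hedged sketch and not as the paper's actual module, is to score each path's feature vector with a shared learnable vector and fuse with softmax weights:

```python
import numpy as np

def adaptive_fusion(f1, f2, w):
    """Fuse two feature vectors with learned, data-dependent weights.

    f1, f2: feature vectors from the two network paths (same length).
    w: a learnable scoring vector (here just passed in; in a real model
    it would be trained end-to-end).
    """
    scores = np.array([w @ f1, w @ f2])     # importance score per path
    alpha = np.exp(scores - scores.max())   # softmax, numerically stable
    alpha = alpha / alpha.sum()             # per-path weights sum to 1
    fused = alpha[0] * f1 + alpha[1] * f2   # convex combination of paths
    return fused, alpha
```

Because the weights depend on the features themselves, a sample whose Transformer-path features are more informative can receive a higher weight on that path, which matches the abstract's claim that the module evaluates each feature's importance per input.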
【Key words】 EEG emotion recognition; depression detection; feature reconstruction; attention mechanisms; Transformer
- 【Source】 Journal of Chongqing University of Technology (Natural Science), 2024, Issue 07
- 【Classification codes】 TN911.7; R318