Abstract: Human Activity Recognition (HAR) is an important way for lower limb exoskeleton robots to achieve human-computer collaboration with their users. Most existing methods in this field address a simple scenario, recognizing activities for specific users; they neither account for individual differences among users nor adapt to new users. To improve the generalization ability of the HAR model, this paper proposes a novel method that combines transfer learning and active learning to mitigate the cross-subject issue, enabling lower limb exoskeleton robots to be used in more complex scenarios. First, a neural network based on convolutional neural networks (CNNs) is designed to extract temporal and spatial features from sensor signals collected from different parts of the human body; after being trained on labeled data, it recognizes human activities with high accuracy. Second, to improve the cross-subject adaptation ability of the pre-trained model, we design a cross-subject HAR algorithm based on sparse interrogation and label propagation. Under leave-one-subject-out validation against existing methods on two widely used public datasets, our method achieves average accuracies of 91.77% on DSAD and 80.97% on PAMAP2. The experimental results demonstrate the potential of cross-subject HAR for lower limb exoskeleton robots.
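The abstract does not give the exact formulation of its label propagation step, so the following is only a minimal, generic sketch of label propagation over a sample-similarity graph, where a few actively queried ("sparse interrogation") samples seed the labels. The function name, the iterative diffusion scheme, and all parameters are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def propagate_labels(similarity, seed_labels, n_classes, iters=50):
    """Spread labels from a few queried samples over a similarity graph.

    similarity: (n, n) non-negative similarity matrix between samples.
    seed_labels: length-n int array, class index for queried samples,
    -1 for unlabeled samples.
    """
    n = len(seed_labels)
    # Row-normalize the similarity matrix into a transition matrix.
    P = similarity / similarity.sum(axis=1, keepdims=True)
    # One-hot label scores for the queried samples, zeros elsewhere.
    F = np.zeros((n, n_classes))
    labeled = seed_labels >= 0
    F[labeled, seed_labels[labeled]] = 1.0
    for _ in range(iters):
        F = P @ F                      # diffuse label scores to neighbors
        F[labeled] = 0.0               # re-clamp the queried samples
        F[labeled, seed_labels[labeled]] = 1.0
    return F.argmax(axis=1)
```

In this sketch, unlabeled samples inherit the label that dominates among their strongly connected neighbors, which is the usual behavior one would want when only a sparse set of new-user samples has been queried.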
Funding: Supported by NSFC under Grant No. 62076083. The authors would like to thank the Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province (Grant No. 2020E10010) and the Industrial Neuroscience Laboratory of Sapienza University of Rome.
Abstract: Cognitive state detection using electroencephalogram (EEG) signals for various tasks has attracted significant research attention. However, it is difficult to further improve the performance of cross-subject cognitive state detection. Moreover, most existing deep learning models degrade significantly when only limited training samples are given, and feature hierarchical relationships are ignored. To address these challenges, we propose an efficient interpretation model based on multiple capsule networks for cross-subject EEG cognitive state detection, termed the Efficient EEG-based Multi-Capsule Framework (E3GCAPS). Specifically, we use a self-expression module to capture potential connections between samples, which helps alleviate sensitivity to outliers caused by the individual differences of cross-subject EEG. In addition, considering the strong correlation between cognitive states and brain functional connectivity patterns, a dynamic subcapsule-based spatial attention mechanism is introduced to explore the spatial relationships of multi-channel 1D EEG data, which greatly improves training efficiency while preserving model performance. The effectiveness of E3GCAPS is validated on the Fatigue-Awake EEG Dataset (FAAD) and the SJTU Emotion EEG Dataset (SEED). Experimental results show that E3GCAPS achieves remarkable results on EEG-based cross-subject cognitive state detection under different tasks.
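The E3GCAPS self-expression module is not specified in detail in the abstract. As an illustration of the general idea only, a ridge-regularized self-expression problem (minimize ||X - CX||_F^2 + λ||C||_F^2) has a closed-form solution that re-expresses each sample as a weighted combination of the others; the function name and this particular formulation are assumptions, not the authors' design:

```python
import numpy as np

def self_expression(X, lam=0.1):
    """Closed-form ridge self-expression over a batch of samples.

    X: (n, d) matrix of n sample feature vectors. Returns the (n, n)
    coefficient matrix C minimizing ||X - C X||_F^2 + lam ||C||_F^2,
    so row i of C says how sample i is rebuilt from all samples.
    (Subspace-clustering variants also force a zero diagonal; that
    constraint is omitted here for brevity.)
    """
    G = X @ X.T                                  # (n, n) Gram matrix
    n = G.shape[0]
    # Stationarity condition C(G + lam*I) = G gives the solution below.
    return G @ np.linalg.inv(G + lam * np.eye(n))
```

With a small λ, C·X closely reconstructs X, and the off-diagonal weights in C expose which samples "explain" each other, which is the kind of inter-sample connection the abstract says the module captures.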
Funding: The Special Projects in Key Fields supported by the Technology Development Project of Guangdong Province (Grant No. 2020ZDZX3018), the Special Fund for Science and Technology of Guangdong Province (Grant No. 2020182), and the Wuyi University and Hong Kong & Macao Joint Research Project (Grant No. 2019WGALH16).
Abstract: The rapid serial visual presentation (RSVP) paradigm has garnered considerable attention in brain-computer interface (BCI) systems. Studies have focused on using cross-subject electroencephalogram data to train cross-subject RSVP detection models. In this study, we performed a comparative analysis of the top 5 deep learning algorithms used by various teams in the event-related potential competition of the BCI Controlled Robot Contest at the World Robot Contest 2022. We evaluated these algorithms on the final dataset and compared their performance in cross-subject RSVP detection. The results revealed that deep learning models can achieve excellent results with appropriate training methods when applied to cross-subject detection tasks. We discussed the limitations of existing deep learning algorithms in cross-subject RSVP detection and highlighted potential research directions.
Abstract: This paper proposes a cross-subject deep neural network recognition method to cope with the nonlinear, non-stationary characteristics of motor imagery EEG signals. The method first computes the mean covariance matrix and aligns the covariance of each subject's sample set to the identity matrix, improving the cross-subject generalization of the samples. The aligned samples are then fed into a convolutional neural network, and a cross-subject motor imagery EEG recognition method is constructed using leave-one-subject-out cross-validation. Experiments on the public BCI Competition IV dataset 2b show that the new method achieves high recognition performance on this dataset, with the same test-time complexity as existing methods.
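The covariance-alignment step described above matches the widely used Euclidean Alignment recipe: whiten each subject's trials by the inverse square root of that subject's mean spatial covariance, so the aligned mean covariance becomes the identity matrix. A minimal sketch under that assumption (function name and array shapes are illustrative):

```python
import numpy as np

def euclidean_align(trials):
    """Align one subject's EEG trials so their mean spatial covariance
    becomes the identity matrix.

    trials: array of shape (n_trials, n_channels, n_samples).
    """
    # Mean spatial covariance across this subject's trials.
    R = np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)
    # Inverse matrix square root of R via eigendecomposition
    # (R is symmetric positive definite for full-rank recordings).
    vals, vecs = np.linalg.eigh(R)
    R_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    # Whiten every trial with the same subject-level transform.
    return np.array([R_inv_sqrt @ t for t in trials])
```

Applying this per subject before pooling data is what makes the samples more comparable across subjects: after alignment, each subject's average covariance is exactly the identity, removing a large part of the inter-subject distribution shift before the CNN ever sees the data.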
Funding: This work was supported in part by the National Natural Science Foundation of China (Grant Nos. U21A20485 and 61976175).
Abstract: Electroencephalogram (EEG) data depict various emotional states and reflect brain activity. There has been increasing interest in EEG emotion recognition in brain-computer interface (BCI) systems. In the World Robot Contest (WRC), the BCI Controlled Robot Contest successfully staged an emotion recognition technology competition. Three types of emotions (happy, sad, and neutral) are modeled using EEG signals. In this study, 5 methods employed by different teams are compared. The results reveal that classical machine learning approaches and deep learning methods perform similarly in offline recognition, whereas deep learning methods perform better in online cross-subject decoding.