Abstract: Background: Working memory is an executive function that plays an important role in many aspects of daily life, and its impairment in patients with attention-deficit/hyperactivity disorder (ADHD) affects quality of life. The dorsolateral prefrontal cortex (DLPFC) is a promising target site for transcranial direct current stimulation (tDCS) because of its strong involvement in working memory. In our 2018 study, tDCS improved visual-verbal working memory in healthy subjects. Objective: This study examines the effects of tDCS in patients with ADHD, particularly on verbal working memory. Methods: In nine patients with ADHD, we conducted an experiment involving verbal working memory in two modalities, visual and auditory, together with a sustained attention task that could affect working memory. Active or sham tDCS was applied to the left DLPFC in a single-blind crossover design. Results: tDCS significantly improved the accuracy of visual-verbal working memory. In contrast, tDCS did not affect auditory-verbal working memory or sustained attention. Conclusion: tDCS over the left DLPFC improved visual-verbal working memory in patients with ADHD, with important implications for potential ADHD treatments.
Abstract: To address the lack of correlation among labels in existing multi-label classification methods for digitized archives, a deep neural network model for archive multi-label classification, ALBERT-Seq2Seq-Attention, is proposed. The model extracts text feature vectors and obtains contextual semantic information through the multi-layer bidirectional Transformer structure inside the ALBERT (A Lite BERT) pre-trained language model; the pre-trained text features are then fed as the input sequence of a Seq2Seq-Attention (Sequence to Sequence with Attention) model, and a label dictionary is built to capture the relationships among the multiple labels. Comparative experiments on three datasets show that the model's classification F1 score exceeds 90% on all of them. The model not only improves multi-label classification of archival texts but also attends to the correlations between labels.
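The decoder described above attends over the ALBERT-extracted features at each step of label generation. As a rough illustration of that attention step alone (not the authors' implementation; the array sizes and random features are made up), here is a minimal NumPy sketch of scaled dot-product attention over a sequence of encoder states:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(query, keys, values):
    """Attend over encoder states; weights form a distribution over positions."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # one score per encoder position
    weights = softmax(scores)            # attention distribution, sums to 1
    context = weights @ values           # weighted sum of the value vectors
    return context, weights

# Hypothetical sizes: a 6-step encoded text, 8-dimensional features.
rng = np.random.default_rng(0)
seq_len, dim = 6, 8
encoder_states = rng.standard_normal((seq_len, dim))
decoder_state = rng.standard_normal(dim)

context, weights = dot_product_attention(decoder_state, encoder_states, encoder_states)
```

In the full model, `context` would be combined with the decoder state to predict the next label, so each emitted label can depend on the labels already generated.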
Abstract: The F10.7 index is an important indicator of solar activity, and accurate prediction of it helps prevent and mitigate the impact of solar activity on radio communication, navigation, satellite communication, and related fields. Based on the characteristics of the F10.7 radio flux, an F10.7 forecasting model, BiLSTM-Attention, is proposed by adding an attention mechanism to a Bidirectional Long Short-Term Memory network (BiLSTM). On the Canadian DRAO dataset, the model achieves a mean absolute error (MAE) of 5.38, keeps the mean absolute percentage error (MAPE) below 5%, and reaches a correlation coefficient (R) of 0.987, outperforming other RNN models. For the F10.7 dataset observed by the L&S telescope in Langfang, China, a Conversion Average Calibration (CAC) method is proposed for data preprocessing; the processed data correlate well with the DRAO dataset. Comparative experiments with the RNN family of models on this dataset show that the BiLSTM-Attention and BiLSTM models hold clear advantages in predicting the F10.7 index, exhibiting good predictive performance and stability.
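The forecast quality above is summarized by three standard metrics: MAE, MAPE, and the correlation coefficient R. For reference, they can be computed as follows (a generic sketch with toy, illustrative flux values, not the paper's evaluation code or data):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error, in the units of the series.
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent; assumes no zeros in y_true.
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def corr(y_true, y_pred):
    # Pearson correlation coefficient R.
    return np.corrcoef(y_true, y_pred)[0, 1]

# Toy F10.7-like series (solar flux units); values are illustrative only.
obs = np.array([70.0, 80.0, 100.0, 120.0, 110.0])
pred = np.array([72.0, 78.0, 105.0, 118.0, 112.0])

metrics = (mae(obs, pred), mape(obs, pred), corr(obs, pred))
```

MAPE divides by the observed value, so it penalizes errors more heavily near solar minimum, when F10.7 is small; that is worth keeping in mind when comparing the reported 5% figure across phases of the solar cycle.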
Funding: the Shanghai Rising-Star Program (No. 22QA1403900), the National Natural Science Foundation of China (No. 71804106), and the Noncarbon Energy Conversion and Utilization Institute under the Shanghai Class IV Peak Disciplinary Development Program.
Abstract: Accurate load forecasting forms a crucial foundation for implementing household demand response plans and optimizing load scheduling. When dealing with short-term load data characterized by substantial fluctuations, a single prediction model can hardly capture temporal features effectively, resulting in diminished prediction accuracy. In this study, a hybrid deep learning framework that integrates an attention mechanism, a convolutional neural network (CNN), improved chaotic particle swarm optimization (ICPSO), and long short-term memory (LSTM) is proposed for short-term household load forecasting. First, the CNN model is employed to extract features from the original data, enhancing the quality of the data features. Subsequently, the moving average method is used for data preprocessing, followed by the application of the LSTM network to predict the processed data. Moreover, the ICPSO algorithm is introduced to optimize the parameters of the LSTM, boosting the model's running speed and accuracy. Finally, the attention mechanism is employed to optimize the output of the LSTM, effectively addressing the information loss induced in the LSTM by lengthy sequences and further elevating prediction accuracy. The numerical analysis verifies the accuracy and effectiveness of the proposed hybrid model: it explores data features adeptly and achieves superior prediction accuracy compared with other forecasting methods for household loads exhibiting significant fluctuations across different seasons.
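Of the pipeline steps above, the moving-average preprocessing is the simplest to make concrete. The abstract does not specify the window size or edge handling, so both are illustrative choices in this minimal NumPy sketch:

```python
import numpy as np

def moving_average(series, window):
    """Trailing moving average; the first window-1 points are left unsmoothed.

    A generic smoothing sketch: the window length and the choice to leave the
    warm-up points untouched are assumptions, not the paper's settings.
    """
    series = np.asarray(series, dtype=float)
    out = series.copy()
    kernel = np.ones(window) / window
    # 'valid' convolution yields one average per fully covered window.
    out[window - 1:] = np.convolve(series, kernel, mode="valid")
    return out

# Toy half-hourly household load values (kW), illustrative only.
load = np.array([1.0, 3.0, 2.0, 6.0, 4.0, 5.0])
smoothed = moving_average(load, window=3)
```

Smoothing before the LSTM suppresses high-frequency spikes so the network can focus on the slower daily and seasonal structure; the spiky residual is what the attention mechanism and ICPSO tuning then help the model handle.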
Funding: the Communication University of China (CUC230A013) and the Fundamental Research Funds for the Central Universities.
Abstract: The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces the Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information from the data. Additionally, the model leverages the joint correlation information of labels and data in computing the text representation, correcting semantic representation biases in the data and increasing the accuracy of the semantic representation. Ultimately, the model computes the classification results by synthesizing these rich semantic representations. Experiments on seven benchmark datasets show that the proposed model achieves competitive results compared with state-of-the-art methods.