Abstract
The original AdaBoost algorithm cannot dynamically adjust the weights of its weak classifiers and assigns excessively high weights to samples that are misclassified repeatedly. To address these problems, an improved AdaBoost algorithm incorporating the Attention mechanism is proposed. First, all samples are fed into the first Attention layer, where a neural network scores each sample. Then, all weak classifiers are fed into the second Attention layer, where another neural network scores each weak classifier. In this way, the weights of the weak classifiers can be adjusted dynamically, and a set of optimal weights is finally obtained. In addition, an L2 regularization term on the sample weights is introduced into the loss function, and its coefficient is tuned to prevent excessively high weights for repeatedly misclassified samples. Experiments on four public datasets show that the proposed algorithm outperforms other improved AdaBoost algorithms from the literature in accuracy.
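The abstract describes scoring weak classifiers with an attention layer and normalizing the scores into ensemble weights. The paper's network architectures and training procedure are not given in this record, so the following is a minimal, hypothetical sketch in plain Python: the neural scoring network is replaced by a simple proxy score (training accuracy) passed through a softmax, and the toy data, thresholds, and `lam` coefficient are all assumptions, not the authors' implementation.

```python
import math

# Toy 1-D dataset: labels in {-1, +1} (illustrative, not from the paper)
X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [1, 1, 1, -1, -1, -1]

# Hypothetical weak classifiers: decision stumps h(x) = +1 if x < t else -1
def stump(t):
    return lambda x: 1 if x < t else -1

stumps = [stump(t) for t in (1.0, 3.0, 5.0)]

def accuracy(h):
    """Fraction of training samples a stump classifies correctly."""
    return sum(1 for xi, yi in zip(X, y) if h(xi) == yi) / len(X)

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Attention-style classifier weights: the paper uses a neural network to
# score each weak classifier; here training accuracy stands in as the score.
alpha = softmax([accuracy(h) for h in stumps])

def ensemble(x):
    """Weighted vote of the stumps, as in a standard boosted ensemble."""
    s = sum(a * h(x) for a, h in zip(alpha, stumps))
    return 1 if s >= 0 else -1

# The paper also adds an L2 penalty on the sample weights to the loss,
# lambda * sum(w_i ** 2), to keep hard samples from dominating;
# lam and the uniform initial weights below are assumed values.
lam = 0.1
sample_w = [1.0 / len(X)] * len(X)
l2_penalty = lam * sum(w * w for w in sample_w)

print([ensemble(xi) for xi in X])  # → [1, 1, 1, -1, -1, -1]
```

With these toy stumps the middle threshold separates the data perfectly, so the softmax gives it the largest weight and the weighted vote recovers all labels; in the paper, the scores (and hence the weights) would instead come from the trained attention networks.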
Authors
缪泽鑫
张会生
任磊
MIAO Ze-xin; ZHANG Hui-sheng; REN Lei (College of Information Science and Technology, Dalian Liaoning 116026, China; College of Science, Dalian Maritime University, Dalian Liaoning 116026, China)
Source
《计算机仿真》
Peking University Core Journal (北大核心)
2022, Issue 7, pp. 337-341 (5 pages)
Computer Simulation
Funding
National Natural Science Foundation of China (61671099).
Keywords
Algorithm
Attention mechanism
Neural network