Abstract
After the "dirty data" that degrade the performance of a single-hidden-layer feedforward network are removed, the traditional extreme learning machine (ELM) algorithm has to retrain the entire network, which adds considerable extra training time. To address this problem, an online negative incremental learning algorithm is proposed on the basis of the traditional ELM algorithm: once the "dirty" training samples are eliminated, the whole network does not need to be retrained; instead, the network is updated simply by updating the output weight matrix on the basis of the original one. Algorithm complexity analysis and simulation results show that the proposed algorithm achieves a higher execution speed.
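The update step described above, removing a sample by adjusting only the output weight matrix, can be illustrated with a short sketch. The abstract does not spell out the paper's exact formula, so the NumPy example below is only a hedged illustration: it assumes the standard batch ELM solution beta = (H^T H)^{-1} H^T T, caches (H^T H)^{-1} and H^T T, and drops one sample with a Sherman-Morrison downdate; the names hidden_output, elm_train and elm_remove_sample are hypothetical, not the paper's implementation.

```python
import numpy as np

# Minimal sketch (assumed, not the paper's algorithm): remove one "dirty"
# sample from an ELM model by downdating the cached inverse of H^T H
# instead of retraining.  Assumes H^T H stays invertible after removal.

rng = np.random.default_rng(0)

def hidden_output(X, W, bias):
    """Hidden-layer output matrix H with a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(X @ W + bias)))

def elm_train(H, T):
    """Batch ELM solution beta = (H^T H)^{-1} H^T T.
    Also returns the cached inverse A_inv and right-hand side B = H^T T,
    which the decremental step below reuses."""
    A_inv = np.linalg.inv(H.T @ H)
    B = H.T @ T
    return A_inv @ B, A_inv, B

def elm_remove_sample(A_inv, B, h, t):
    """Drop one training sample without retraining the whole network.
    h (length L) is the sample's hidden-layer row, t (length m) its target.
    Sherman-Morrison downdate:
        (A - h h^T)^{-1} = A^{-1} + A^{-1} h h^T A^{-1} / (1 - h^T A^{-1} h)."""
    h = h.reshape(-1, 1)                        # L x 1
    t = t.reshape(1, -1)                        # 1 x m
    Ah = A_inv @ h                              # L x 1
    denom = 1.0 - (h.T @ Ah).item()             # leverage of the removed sample
    A_inv_new = A_inv + (Ah @ Ah.T) / denom
    B_new = B - h @ t                           # remove the sample from H^T T
    return A_inv_new @ B_new, A_inv_new, B_new

# Toy data: 200 samples, 5 inputs, 1 output, 20 hidden nodes.
X = rng.standard_normal((200, 5))
T = np.sin(X.sum(axis=1, keepdims=True))
W = rng.standard_normal((5, 20))
bias = rng.standard_normal(20)
H = hidden_output(X, W, bias)

beta, A_inv, B = elm_train(H, T)

# Treat sample 13 as "dirty" and downdate instead of retraining.
i = 13
beta_dec, _, _ = elm_remove_sample(A_inv, B, H[i], T[i])

# Full retraining on the remaining samples, for comparison only.
keep = np.arange(len(X)) != i
beta_full, _, _ = elm_train(H[keep], T[keep])

print(np.max(np.abs(beta_dec - beta_full)))    # should be near machine precision
```

In this sketch the downdate manipulates only L x L matrices (L = number of hidden nodes), so removing a sample costs about O(L^2), whereas a full retrain recomputes H^T H in O(N L^2) and re-inverts it in O(L^3), which is in line with the speedup reported in the abstract.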
Source
《计算机应用与软件》
CSCD
2016, No. 9, pp. 269-272 (4 pages)
Computer Applications and Software
Funding
General Program of the National Natural Science Foundation of China (11171137)
Natural Science Foundation of Zhejiang Province (LY13A010008)
Keywords
Extreme learning machine
Negative incremental algorithm
Algorithm complexity
Simulation experiment