Abstract
1 Introduction: In recent years, research on neural networks has made great progress. In particular, to overcome the defects of the traditional BP learning algorithm, namely its slow learning speed and the mismatch between a manually specified topology and a particular learning task, growth strategies for adaptive neural networks have been developed; these meet the complexity requirements of a given learning task by incrementally growing hidden nodes or subnetworks. Such growth algorithms not only remove the difficulty of specifying a topology by hand, but, owing to the modular training inherent in their constructive process, also alleviate the notoriously slow training of the traditional BP algorithm. Because the proper degree of training is hard to judge, many algorithms overemphasize the training result at the expense of generalization, leading to severe overfitting. To overcome overfitting, researchers have adopted multi-network cooperation models, which benefit from the averaging effect of multiple networks, as the sketch below illustrates.
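The averaging effect mentioned above is straightforward to demonstrate: averaging the outputs of several high-variance models trained on resampled data typically lowers test error. The following toy is a hypothetical illustration only, with overfitted polynomial fits standing in for networks; it is not the paper's model.

```python
# Toy illustration of the ensemble-averaging idea behind multi-network
# cooperation models: averaging several independently trained high-variance
# predictors tends to reduce variance and hence overfitting.
import numpy as np

rng = np.random.default_rng(1)

def fit_noisy_poly(x, y, degree=7):
    """One 'network': a high-capacity polynomial fit to a bootstrap resample."""
    i = rng.integers(len(x), size=len(x))  # bootstrap sample of the training set
    return np.polyfit(x[i], y[i], degree)

x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 30)  # noisy training targets
x_test = np.linspace(0, 1, 200)
true = np.sin(2 * np.pi * x_test)                   # noise-free test targets

single = np.polyval(fit_noisy_poly(x, y), x_test)
ensemble = np.mean([np.polyval(fit_noisy_poly(x, y), x_test)
                    for _ in range(25)], axis=0)    # average of 25 'networks'

print("single-model test MSE:    ", np.mean((single - true) ** 2))
print("ensemble-average test MSE:", np.mean((ensemble - true) ** 2))
```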
Most neural networks today perform a passive learning procedure: they must receive and learn all candidate exemplars, so when the exemplars are highly redundant, training is very time-consuming. This paper presents an evolutionary active learning algorithm to address this problem. It allows a neural network to evolutionarily select, at each step, a subset of representative exemplars from which useful information is extracted, so that knowledge accumulates until the learning requirements are met. Unlike previous active learning strategies, its learning focuses on a subset of interesting exemplars rather than an individual exemplar. Its distinct strength lies in exploiting the collective effect of a subset of exemplars produced by the evolutionary scheme, which gives it an advantage over other algorithms. Simulation results on two tests indicate that the method can actively learn a concise set of exemplars representative of all available examples; in effect, this performs a kind of data compression, so training is greatly sped up.
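The selection loop the abstract describes can be made concrete with a small genetic algorithm over binary exemplar masks. This is a minimal sketch under assumed details: the population size, uniform crossover, bit-flip mutation, the logistic-regression stand-in for the network, and the size-penalized validation fitness are all illustrative choices, not the paper's actual algorithm.

```python
# Hypothetical sketch of an evolutionary active-learning loop: evolve a binary
# mask selecting a subset of exemplars, scoring each subset by how well a model
# trained on it generalizes. All hyperparameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, epochs=200, lr=0.5):
    """Train a tiny logistic-regression 'network' on the chosen exemplars."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

def fitness(mask, X, y, X_val, y_val):
    """Reward subsets that are small yet let the model generalize well."""
    if mask.sum() == 0:
        return -np.inf
    w, b = train_logreg(X[mask], y[mask])
    # Trade validation accuracy off against subset size (the compression effect).
    return accuracy(w, b, X_val, y_val) - 0.001 * mask.sum()

def evolve_subset(X, y, X_val, y_val, pop_size=20, gens=30, p_mut=0.05):
    n = len(y)
    pop = rng.random((pop_size, n)) < 0.2  # random sparse initial subsets
    for _ in range(gens):
        scores = np.array([fitness(m, X, y, X_val, y_val) for m in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep better half
        # Uniform crossover between random elite parents, then bit-flip mutation.
        pa = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        pb = elite[rng.integers(len(elite), size=pop_size - len(elite))]
        cross = np.where(rng.random(pa.shape) < 0.5, pa, pb)
        children = cross ^ (rng.random(cross.shape) < p_mut)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(m, X, y, X_val, y_val) for m in pop])
    return pop[np.argmax(scores)]

# Toy demo: two heavily redundant Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
idx = rng.permutation(400)
X, y = X[idx], y[idx]
best = evolve_subset(X[:300], y[:300], X[300:], y[300:])
print(f"selected {best.sum()} of 300 exemplars")
```

Penalizing subset size in the fitness is one way to realize the data-compression effect the abstract claims: the evolutionary search is pushed toward concise exemplar sets that still preserve validation performance.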
Source
《计算机科学》
CSCD
PKU Core Journal (北大核心)
2002, No. 10, pp. 61-63 (3 pages)
Computer Science
Funding
National Natural Science Foundation of China