Abstract
A gradient-descent step is introduced into the genetic algorithm (GA), and the resulting hybrid replaces the conventional training algorithm for learning and optimizing the weights of a layered feedforward neural network with no intra-layer connections. The main modules of the algorithm are described. GA mutation and its global search locate candidate extrema, an adaptive generation-gap replacement strategy carries out survival-of-the-fittest selection more effectively, and gradient descent converges rapidly in the neighborhood of the better extrema. Simulations show that the hybrid converges much faster than the basic GA and also achieves clearly better learning quality than the traditional training algorithm.
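To make the hybrid concrete, the following is a minimal Python sketch of the general idea stated in the abstract: a GA evolves flat weight vectors of a small layered feedforward network, an error-dependent generation gap decides how many individuals are replaced each generation, and a few gradient-descent steps refine the best individuals near their current extrema. The network size, the XOR task, the mutation scale, and the particular gap formula are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not the paper's exact implementation) of a hybrid
# GA + gradient-descent trainer for a small feedforward network.
# Task, network size, and all hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy training set: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 3, 1
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # total number of weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, x):
    """Layered feedforward pass; all weights live in one flat vector."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    h = sigmoid(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

def numeric_grad(w, eps=1e-5):
    """Central-difference gradient of the MSE (keeps the sketch short)."""
    g = np.zeros_like(w)
    for k in range(w.size):
        d = np.zeros_like(w); d[k] = eps
        g[k] = (mse(w + d) - mse(w - d)) / (2 * eps)
    return g

POP, GENS, ELITE = 30, 200, 3
pop = rng.normal(0.0, 1.0, size=(POP, N_W))

for gen in range(GENS):
    fitness = np.array([mse(ind) for ind in pop])
    order = np.argsort(fitness)          # ascending error = best first
    pop = pop[order]

    # Adaptive generation gap (assumed formula): replace a larger share of
    # the population while the error is high, a smaller share once it drops.
    gap = 0.3 + 0.4 * min(1.0, fitness[order[0]] / 0.25)
    n_replace = int(gap * POP)

    # Offspring are mutated copies of randomly chosen survivors (global search).
    parents = pop[rng.integers(0, POP - n_replace, size=n_replace)]
    pop[POP - n_replace:] = parents + rng.normal(0.0, 0.5, parents.shape)

    # Gradient descent refines the elite near their current extrema (local search).
    for e in range(ELITE):
        for _ in range(5):
            pop[e] -= 0.5 * numeric_grad(pop[e])

print("best MSE:", mse(pop[0]))
print("outputs:", forward(pop[0], X).ravel().round(3))

In this sketch the GA supplies diversity and global exploration while the gradient steps do the fast local convergence, which is the division of labor the abstract describes.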
Source
《西南师范大学学报(自然科学版)》
CAS
CSCD
PKU Core (Peking University Core Journals)
2002, No. 1, pp. 35-38 (4 pages)
Journal of Southwest China Normal University(Natural Science Edition)