Abstract
Because of the high-dimensional complexity of the error function, BP networks in current applications suffer from slow training and can even be driven into network paralysis. This paper studies the normalization of training data, the selection of the number of hidden-layer nodes, the addition and removal of training samples, the determination of the global learning rate, and the training algorithm. The results show that training sample data need not be normalized to [0, 1]: a simple linear transform can map the data onto some other interval that gives a reasonable data distribution and satisfies the training requirements. The number of hidden-layer nodes is initialized from an empirical formula, and training starts with a slightly larger network. The network should reinforce its memory of samples it has already learned and, more importantly, eliminate erroneous samples. Adjusting the step size of the learning rate with the idea of the golden-section method gives good results, and using a single-parameter dynamic search algorithm as the training algorithm reaches the required accuracy quickly.
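Two of the abstract's tips are concrete enough to sketch in code. Below is a minimal illustration, not the paper's actual implementation: `rescale` maps sample data linearly onto an arbitrary interval `[lo, hi]` rather than forcing [0, 1], and `golden_section_step` uses a standard golden-section line search to pick a learning-rate step that minimizes a one-dimensional loss. The function names, the search interval, and the example loss are all assumptions for illustration.

```python
import numpy as np

def rescale(x, lo=-1.0, hi=1.0):
    # Linearly map data onto [lo, hi]; the target interval is a free
    # choice (not fixed to [0, 1]), picked to give a reasonable spread.
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    return lo + (x - xmin) * (hi - lo) / (xmax - xmin)

def golden_section_step(loss, a=0.0, b=1.0, tol=1e-5):
    # Golden-section search for the learning-rate step on [a, b]:
    # shrink the bracket by the golden ratio until it is narrower
    # than tol, keeping the subinterval with the smaller loss.
    phi = (5 ** 0.5 - 1) / 2  # golden-ratio conjugate, ~0.618
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if loss(c) < loss(d):
            b, d = d, c          # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d          # minimum lies in [c, b]
            d = a + phi * (b - a)
    return 0.5 * (a + b)
```

For a unimodal loss such as a quadratic with its minimum at 0.3, `golden_section_step(lambda r: (r - 0.3) ** 2)` converges to approximately 0.3; in training, `loss` would instead evaluate the network error after a trial step of that size.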
Source
《哈尔滨工业大学学报》
EI
CAS
CSCD
Peking University Core Journal (北大核心)
2001, No. 4, pp. 439-441 (3 pages)
Journal of Harbin Institute of Technology
Funding
National Natural Science Foundation of China (69974013)
Heilongjiang Provincial Science and Technology Program (k9912)