Abstract
Feedforward neural networks are trained in a supervised manner: the network weights and thresholds are adjusted according to a function of the discrepancy between the actual output and the desired output (the error function), and training is repeated until the error function reaches its minimum. This paper discusses the structural form of the error function and gives a necessary and sufficient condition for the error function to attain its minimum at the conditional logarithmic mean; this condition in effect specifies the structural form of the error function. Further analysis shows that this structural form generalizes existing results and, moreover, possesses a certain robustness to interference. The structural form under which the error function attains its minimum at the first α-quantile is then discussed; this conclusion has broader significance.
Feedforward neural networks (FF networks) are among the most popular and widely used models in practical applications. The network is organized in layers. Given a network, it can be trained on a data set of input-output pairs: the network is trained by minimizing a given error function that measures the discrepancy between the network output and the desired output. In this paper, the structural form of the error function is discussed, and a necessary and sufficient condition on the error function is derived which ensures that the output of the trained network approximates the conditional logarithmic expectation of the desired output over the training patterns. Further analysis shows that the previously known structural form of the error function is only a special case of the form obtained here; in addition, the form obtained here overcomes the weak anti-jamming ability of the existing one. A condition under which minimizing the error function approximates the first quantile of order α is also discussed; this conclusion has broader significance. The results offer a good foundation for further research on artificial neural networks.
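The abstract's central point can be illustrated numerically: the choice of error function determines which statistic of the desired output the trained network approximates. The sketch below (not the paper's derivation; the pinball-loss form and all names are illustrative assumptions) checks that a constant prediction minimizing the quantile ("pinball") loss lands at the empirical α-quantile of the targets, whereas squared error would land at the mean.

```python
import numpy as np

# Illustrative sketch only: the error function fixes the statistic that
# the optimal (constant) output approximates.
#   - squared error      -> conditional mean
#   - pinball loss       -> alpha-quantile
# Pinball loss: E_alpha(y, t) = alpha*(t - y)      if t >= y
#                               (1 - alpha)*(y - t) otherwise

rng = np.random.default_rng(0)
targets = rng.exponential(scale=1.0, size=100_000)  # skewed target sample
alpha = 0.25

def pinball_loss(y, t, alpha):
    """Mean pinball (quantile) loss of a constant prediction y on targets t."""
    diff = t - y
    return np.mean(np.where(diff >= 0, alpha * diff, (alpha - 1) * diff))

# Scan candidate constant outputs; the minimizer should sit near the
# empirical alpha-quantile of the targets, not near their mean.
candidates = np.linspace(0.0, 5.0, 2001)
losses = [pinball_loss(y, targets, alpha) for y in candidates]
y_star = candidates[int(np.argmin(losses))]

print("pinball minimizer :", y_star)
print("empirical quantile:", np.quantile(targets, alpha))
print("sample mean       :", targets.mean())
```

The minimizer of the scanned pinball loss should agree with `np.quantile(targets, alpha)` up to the grid spacing, while the sample mean sits noticeably higher for this skewed distribution.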
Source
《计算机研究与发展》
EI
CSCD
Peking University Core Journal
2003, No. 7, pp. 913-917 (5 pages)
Journal of Computer Research and Development
Funding
National Natural Science Foundation of China (F60085002)
Key Project of the Hubei Provincial Department of Education (2000A01020)
Key Project of Hubei University (A0001)