Abstract
The error bounds of a finite-precision implementation of the authors' double parallel feedforward neural network (DPFNN) are analyzed, and simulation results for several finite-precision arithmetic implementations are given.
A looser statistical model is used to analyze the errors of neural networks when limited-precision operations are used to design and implement a real-valued double parallel feedforward neural network (DPFNN). The analysis is based on new assumptions on the expectation bounds and variance bounds of both the inputs and the weights, instead of the usual assumptions on the distributions of the neural inputs, weights, states, activation functions, etc. The new assumptions are much less restrictive and closer to practical situations. We develop, as functions of the variance and expectation (mean) parameters, general statistical formulations of the performance degradation of the neural network caused by errors in the inputs and weights. The study shows that the network's performance degradation worsens when either the variance bounds or the mean bounds of the errors in either the inputs or the weights are increased; that the error bound of a DPFNN is smaller than that of an MLFNN (multilayer feedforward neural network); and that the error bounds of both a DPFNN and an MLFNN increase as the wordlength is decreased. Furthermore, the theoretical analysis and simulations also show that when a wordlength of more than 12 bits is used, the effect of quantization on either a DPFNN or an MLFNN is very small.
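The abstract's finding, that quantization has little effect above roughly a 12-bit wordlength, can be illustrated with a small numerical sketch. The snippet below is not from the paper: the network sizes, the sigmoid activation, and the uniform fixed-point quantizer are all illustrative assumptions. It quantizes the inputs and weights of a DPFNN-style network (a hidden layer in parallel with direct input-to-output connections) at several wordlengths and compares the outputs with the full-precision result.

```python
import numpy as np

def quantize(x, bits):
    # Uniform fixed-point quantizer: 1 sign bit, bits-1 fractional bits,
    # values clipped to [-1, 1). An illustrative assumption, not the
    # paper's exact arithmetic model.
    scale = 2.0 ** (bits - 1)
    return np.clip(np.round(x * scale), -scale, scale - 1) / scale

def dpfnn(x, W1, W2, W3, bits=None):
    # DPFNN-style forward pass: a sigmoid hidden layer (x @ W1 -> W2)
    # in parallel with direct input-to-output connections (x @ W3).
    if bits is not None:  # quantize inputs and weights only
        x, W1, W2, W3 = (quantize(a, bits) for a in (x, W1, W2, W3))
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))
    return h @ W2 + x @ W3

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 8))    # 100 samples, 8 inputs
W1 = rng.uniform(-1, 1, size=(8, 16))    # input -> hidden weights
W2 = rng.uniform(-1, 1, size=(16, 1))    # hidden -> output weights
W3 = rng.uniform(-1, 1, size=(8, 1))     # direct input -> output weights

exact = dpfnn(x, W1, W2, W3)
for bits in (6, 8, 12, 16):
    err = np.max(np.abs(dpfnn(x, W1, W2, W3, bits) - exact))
    print(f"{bits:2d}-bit wordlength: max |output error| = {err:.5f}")
```

Consistent with the abstract, the output error shrinks rapidly as the wordlength grows, and at 12 bits and beyond it is small compared with the 6- and 8-bit cases.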
Source
《西北工业大学学报》
EI
CAS
CSCD
Peking University Core Journals
1997, No. 1, pp. 125-130 (6 pages)
Journal of Northwestern Polytechnical University
Funding
National Climbing Program major key project on neural networks
National Natural Science Foundation of China
Keywords
neural network
finite-precision implementation
error analysis
DPFNN
double parallel feedforward neural network, limited precision operation, error analysis