Abstract
A learning algorithm for neural networks is presented in which the learning trajectory converges without over-learning: the topological construction of the algorithm is changed near local minimum points of the learning error. With the usual BP method, training fails to converge for some functions near certain local minimum points, which gives rise to an over-learning phenomenon. To avoid this phenomenon, reference-following variables are used to change the topological construction of the algorithm. Theoretical analysis and simulation results indicate that the proposed method is simple and useful.
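The idea described above can be sketched in code. The following is a minimal illustration, not the authors' method: it trains a one-hidden-layer network with standard backpropagation and, when the gradient norm is small while the error is still high (a crude "near a local minimum" test), blends the weights with slowly updated reference variables. The form of the reference-following term, the stall test, and all hyperparameters are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch: plain BP on a 1-hidden-layer network for XOR, with a
# HYPOTHETICAL reference-following correction applied when training stalls.
# The correction term below is an illustrative assumption, not the paper's formula.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
ref_W1, ref_W2 = W1.copy(), W2.copy()   # reference variables (assumed form)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, beta = 0.5, 0.05
losses = []

for step in range(5000):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    err = out - Y
    losses.append(float(np.mean(err ** 2)))

    # backward pass: standard BP gradients for mean squared error
    d_out = 2 * err * out * (1 - out) / len(X)
    gW2 = H.T @ d_out; gb2 = d_out.sum(0)
    d_H = (d_out @ W2.T) * H * (1 - H)
    gW1 = X.T @ d_H; gb1 = d_H.sum(0)

    grad_norm = np.sqrt((gW1 ** 2).sum() + (gW2 ** 2).sum())
    stalled = grad_norm < 1e-3 and losses[-1] > 0.05  # crude stall test

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    if stalled:
        # pull weights toward a slowly updated reference trajectory
        # (illustrative stand-in for the paper's topology change)
        W1 += beta * (ref_W1 - W1)
        W2 += beta * (ref_W2 - W2)
    ref_W1 += beta * (W1 - ref_W1)
    ref_W2 += beta * (W2 - ref_W2)

print("initial loss:", round(losses[0], 4), "final loss:", round(losses[-1], 4))
```

Without the correction, plain gradient descent can sit at a flat region indefinitely; the reference term gives the weights a small nudge along their recent trajectory, which is one simple way to escape such points.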
Funding
This work was supported by the Education Foundation of Shanghai (No. 03AK014).