Funding: The NSF of China (10871220) and the Doctoral Foundation of China University of Petroleum (Y080820).
Abstract: The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve the generalization performance and to decrease the magnitude of the network weights. In this paper, weight boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for a BP neural network with one hidden layer, under the assumption that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with penalty during the training iterations is also guaranteed. Simulation results for a 3-bit parity problem are presented to support the theoretical results.
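The training scheme described above can be illustrated with a minimal sketch: a one-hidden-layer sigmoid network trained sample-by-sample in a fixed order within each epoch, with an L2 penalty added to the squared error so that the weight updates include a decay term. The network sizes, learning rate `eta`, penalty coefficient `lam`, and random seed below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-bit parity problem: all 8 binary inputs, target = parity of the bits.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
T = X.sum(axis=1) % 2  # targets in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden = 8  # hidden-layer width (assumed)
V = rng.normal(scale=0.5, size=(n_hidden, 3))  # input-to-hidden weights
w = rng.normal(scale=0.5, size=n_hidden)       # hidden-to-output weights

eta, lam = 0.5, 1e-4  # learning rate and penalty coefficient (assumed)

for epoch in range(3000):
    # Samples supplied in a fixed order within each epoch.
    for x, t in zip(X, T):
        h = sigmoid(V @ x)  # hidden-layer output
        y = sigmoid(w @ h)  # network output
        # Gradients of 0.5*(y - t)^2 + 0.5*lam*(||w||^2 + ||V||_F^2);
        # the lam-terms are the penalty's weight-decay contribution.
        delta_y = (y - t) * y * (1 - y)
        grad_w = delta_y * h + lam * w
        delta_h = delta_y * w * h * (1 - h)
        grad_V = np.outer(delta_h, x) + lam * V
        w -= eta * grad_w
        V -= eta * grad_V

preds = np.array([sigmoid(w @ sigmoid(V @ x)) for x in X])
print("final output-weight norm:", np.linalg.norm(w))
```

The penalty keeps the weight sequence bounded during training, which is the property the boundedness theorems formalize; without the `lam` terms the weights can grow without bound on separable-style targets.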
Abstract: Online gradient methods are widely used for training the weights of neural networks and for other engineering computations. In certain cases the resulting weights may become very large, causing difficulties in the implementation of the network by electronic circuits. In this paper we introduce a penalty term into the error function of the training procedure to prevent this situation. The convergence of the iterative training procedure and the boundedness of the weight sequence are proved. A supporting numerical example is also provided.