Funding: This work is supported by the Laboratory of Management, Decision and Information Systems, Academia Sinica.
Abstract: This paper proposes compensating methods for feedforward neural networks (FNNs) that are difficult to train with the traditional back-propagation (BP) method. For an FNN trapped in a local minimum, the compensating methods correct the wrong outputs one by one until all outputs are right, at which point the network sits at a global optimum. One hidden neuron is added to compensate a three-layer FNN with binary inputs that is trapped in a local minimum, and one or two hidden neurons are added to compensate a three-layer FNN with real inputs. For an FNN with more than three layers, the second-to-last hidden layer is temporarily treated as the input layer during compensation, so the same methods apply.
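The abstract does not spell out the construction, but for the binary-input case one plausible reading is that the added hidden neuron is wired to fire on exactly the one misclassified input pattern and to tip that pattern's output to the correct side, leaving every other binary pattern untouched. The NumPy sketch below illustrates this idea under those assumptions; the names (ThreeLayerFNN, compensate_binary) and the strength parameter are hypothetical, not taken from the paper.

import numpy as np

def step(z):
    # Hard-threshold activation used for this illustrative binary network.
    return (z >= 0).astype(float)

class ThreeLayerFNN:
    # Minimal three-layer FNN (input -> hidden -> single output) with step units.
    def __init__(self, W1, b1, W2, b2):
        self.W1, self.b1 = W1, b1  # hidden-layer weights and biases
        self.W2, self.b2 = W2, b2  # output-layer weights and bias

    def forward(self, x):
        h = step(self.W1 @ x + self.b1)
        return step(self.W2 @ h + self.b2)

def compensate_binary(net, x_wrong, y_target, strength=100.0):
    # Append one hidden neuron that fires only on the binary pattern
    # x_wrong and pushes the output toward y_target; every other binary
    # input leaves it silent, so previously correct outputs survive.
    x_wrong = np.asarray(x_wrong, dtype=float)
    w_new = 2.0 * x_wrong - 1.0      # +1 where the pattern has a 1, -1 where it has a 0
    k = x_wrong.sum()
    b_new = -(k - 0.5)               # w_new @ x peaks at k only when x == x_wrong
    # Output weight whose sign flips the wrong output; strength (an assumed
    # free parameter) must exceed the existing net input's magnitude on x_wrong.
    v_new = strength * (2.0 * float(y_target) - 1.0)
    net.W1 = np.vstack([net.W1, w_new])
    net.b1 = np.append(net.b1, b_new)
    net.W2 = np.hstack([net.W2, [[v_new]]])
    return net

# Usage: repair a random net that misclassifies the input [1, 0].
rng = np.random.default_rng(0)
net = ThreeLayerFNN(rng.standard_normal((2, 2)), rng.standard_normal(2),
                    rng.standard_normal((1, 2)), rng.standard_normal(1))
x = np.array([1.0, 0.0])
if net.forward(x)[0] != 1.0:
    compensate_binary(net, x, y_target=1.0)
assert net.forward(x)[0] == 1.0

This sketch covers only the binary-input case; for real-valued inputs the abstract states that one or two hidden neurons may be required, and the paper's construction for that case is not reproduced here.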