Abstract
In this article, we propose sharpening the gain of a chaotic annealing neural network to solve 0-1 constrained optimization problems. During the chaotic annealing, the gain of the neurons gradually increases and finally reaches a large value. This strategy accelerates the convergence of the network to a binary state while keeping the constraints satisfied. Simulations on knapsack problems demonstrate that the approach is efficient both in approximating the global solution and in reducing the number of iterations.
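The gain-sharpening idea can be illustrated with a single sigmoid neuron: the activation is $y = 1/(1 + e^{-x/\varepsilon})$, and the gain $1/\varepsilon$ is increased (i.e., $\varepsilon$ is shrunk) at each annealing step, which squeezes the output toward the binary values 0 and 1. The sketch below is illustrative only; the schedule parameters (`eps0`, `gamma`, `steps`) are hypothetical and not taken from the article.

```python
import math

def sigmoid(x, eps):
    # Numerically stable sigmoid with gain 1/eps.
    t = x / eps
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def gain_sharpened_output(u, eps0=1.0, gamma=0.95, steps=200):
    # Shrink eps geometrically (so the gain 1/eps grows),
    # then evaluate the neuron output for a fixed net input u.
    eps = eps0
    for _ in range(steps):
        eps *= gamma
    return sigmoid(u, eps)

# At low gain the output is graded; at high gain it is nearly binary.
print(sigmoid(0.2, 1.0))           # graded value near 0.55
print(gain_sharpened_output(0.2))  # driven toward 1
print(gain_sharpened_output(-0.2)) # driven toward 0
```

In the full network each neuron's net input would come from the energy function of the 0-1 problem; here a constant input stands in for it to isolate the effect of the gain schedule.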
Funding
This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 69702008.