Funding: This project was supported by the National Natural Science Foundation of China (60174021) and the Tianjin Advanced School Science and Technology Development Foundation (01-20403).
Abstract: Some papers on stochastic adaptive control schemes have established the convergence of algorithms that use least-squares parameter estimation. With the widespread application of generalized predictive control (GPC), global convergence has become a key problem in automatic control theory. However, the global convergence of GPC has not yet been established for algorithms based on a least-squares iteration. A generalized model of adaptive generalized predictive control is presented, and its global convergence is established on the basis of estimating the parameters of GPC by a least-squares algorithm.
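In many adaptive GPC schemes, the parameter estimation step the abstract refers to is a recursive least-squares (RLS) update. The following Python sketch shows a generic RLS update for illustration only; the forgetting factor `lam`, the initial covariance, and the toy identification example are assumptions, not the paper's algorithm.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update.

    theta : current parameter estimate, shape (n,)
    P     : current covariance matrix, shape (n, n)
    phi   : regressor vector, shape (n,)
    y     : new scalar measurement
    lam   : forgetting factor (1.0 gives ordinary least squares)
    """
    Pphi = P @ phi
    K = Pphi / (lam + phi @ Pphi)          # gain vector
    e = y - phi @ theta                    # prediction error
    theta_new = theta + K * e              # parameter update
    P_new = (P - np.outer(K, Pphi)) / lam  # covariance update
    return theta_new, P_new

# Toy example (hypothetical): identify theta in y = phi^T theta + noise
rng = np.random.default_rng(0)
true_theta = np.array([0.8, -0.3, 1.5])
theta, P = np.zeros(3), 1e3 * np.eye(3)
for _ in range(200):
    phi = rng.normal(size=3)
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # approaches true_theta
```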
Funding: Project (No. 60074008) supported by the National Natural Science Foundation of China.
Abstract: Studies on the stability of the equilibrium points of continuous bidirectional associative memory (BAM) neural networks have yielded many useful results. A novel neural network model, called the standard neural network model (SNNM), is advanced. By using a state affine transformation, BAM neural networks are converted to SNNMs. Some sufficient conditions for the global asymptotic stability of continuous BAM neural networks are derived from studies of the SNNMs' stability. These conditions are formulated as easily verifiable linear matrix inequalities (LMIs) whose conservativeness is relatively low. The proposed approach extends the known stability results and can also be applied to other forms of recurrent neural networks (RNNs).
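An LMI condition such as those mentioned above can be checked numerically with a semidefinite-programming solver. The sketch below (Python, assuming cvxpy is available) tests feasibility of the basic Lyapunov LMI A^T P + P A < 0 with P > 0 for a linear system; the paper's SNNM-specific LMIs are more involved, so this only illustrates how such a condition is verified in practice.

```python
import numpy as np
import cvxpy as cp

def lyapunov_lmi_feasible(A, eps=1e-6):
    """Search for P > 0 with A^T P + P A < 0 (a basic stability LMI,
    not the SNNM-specific condition from the paper)."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
print(lyapunov_lmi_feasible(A))  # True: A is Hurwitz
```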
Abstract: A recurrent neural network (RNN) model based on a projective operator is studied. Unlike earlier studies, the value region of the projective operator in this network is a general closed convex subset of n-dimensional Euclidean space and is not, in general, a compact convex set; that is, the value region of the projective operator may be unbounded. It is proved that the network has a global solution and that its solution trajectory converges to some equilibrium set whenever the objective function satisfies certain conditions. The model is then applied to continuously differentiable optimization and to nonlinear or implicit complementarity problems. In addition, simulation experiments confirm the efficiency of the RNN.
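A common form of projective-operator network for constrained optimization is dx/dt = -x + P_Ω(x - α∇f(x)), where P_Ω projects onto the constraint set Ω. The Python sketch below simulates such dynamics by forward Euler with a simple box constraint; the box, the gain α, and the step size are illustrative assumptions, whereas the paper treats a general (possibly unbounded) closed convex Ω.

```python
import numpy as np

def project_box(x, lower, upper):
    """Projection onto the box [lower, upper] -- a simple closed convex set."""
    return np.clip(x, lower, upper)

def projection_rnn(grad_f, x0, lower, upper, alpha=0.5, dt=0.01, steps=5000):
    """Forward-Euler simulation of dx/dt = -x + P_Omega(x - alpha * grad_f(x)),
    a standard projection-type RNN used here only for illustration."""
    x = x0.astype(float)
    for _ in range(steps):
        x = x + dt * (-x + project_box(x - alpha * grad_f(x), lower, upper))
    return x

# Toy example (hypothetical): minimize f(x) = ||x - c||^2 / 2 over the box [0, 1]^2
c = np.array([2.0, -0.5])
grad_f = lambda x: x - c
print(projection_rnn(grad_f, np.zeros(2), 0.0, 1.0))  # approaches [1, 0]
```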
基金supported by the National Science Foundation of China under Grant No.70971076the Foundation of Shandong Provincial Education Department under Grant No.J10LA59
Abstract: In this article, a new descent memory gradient method without restarts is proposed for solving large-scale unconstrained optimization problems. The method has the following attractive properties: 1) the search direction is a sufficient descent direction at every iteration, independent of the line search used; 2) the search direction always satisfies the angle property, independent of the convexity of the objective function. Under mild conditions, the authors prove that the proposed method is globally convergent, and its convergence rate is also investigated. Numerical results show that the new descent memory method is efficient on the given test problems.
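A memory gradient method combines the current negative gradient with the previous search direction, d_k = -g_k + beta_k d_{k-1}. The sketch below is a generic member of this class with an ad hoc beta_k, a descent safeguard, and Armijo backtracking, all of which are assumptions for illustration; the paper's beta_k and step rule, which guarantee sufficient descent and the angle property without relying on the line search, differ.

```python
import numpy as np

def memory_gradient(f, grad, x0, max_iter=500, tol=1e-6):
    """Generic memory-gradient iteration d_k = -g_k + beta_k * d_{k-1}
    (illustrative only; not the paper's specific method)."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        # Ad hoc bounded memory coefficient (assumption, not the paper's choice)
        beta = 0.2 * np.linalg.norm(g_new) / max(np.linalg.norm(d), 1e-12)
        d_new = -g_new + beta * d
        if g_new @ d_new >= 0:     # safeguard: fall back to steepest descent
            d_new = -g_new
        g, d = g_new, d_new
    return x

# Toy example: convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(memory_gradient(f, grad, np.zeros(2)))  # approaches np.linalg.solve(A, b)
```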
Funding: Supported by the National Natural Science Foundation of China (Grants 11001075 and 11161003), the Post-doctoral Foundation of China (Grant 20090461094), and the Natural Science Foundation of Henan Province Education Department (Grant 2010B110004).
Abstract: In this paper, a modified limited memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses the global convergence property without a convexity assumption on the objective function. Under some suitable conditions, the global convergence of the proposed method is proved. Some numerical results are reported which illustrate that the proposed method is efficient.
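Limited memory BFGS forms its search direction from the last m curvature pairs via the standard two-loop recursion. The sketch below implements that recursion plus a simple backtracking driver; it shows the unmodified L-BFGS machinery and skips pairs with s^T y <= 0 as a common safeguard, whereas the paper's modification achieves global convergence without the convexity assumption by other means.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard two-loop recursion: returns an approximation of -H_k g."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:                                  # initial Hessian scaling
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q

def lbfgs(f, grad, x0, m=5, max_iter=200, tol=1e-8):
    """Plain L-BFGS with Armijo backtracking (illustrative, unmodified)."""
    x = x0.astype(float)
    g = grad(x)
    s_list, y_list = [], []
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = lbfgs_direction(g, s_list, y_list)
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-10:                       # keep only safe curvature pairs
            s_list.append(s); y_list.append(y)
            if len(s_list) > m:
                s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

# Toy example: minimize the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(lbfgs(f, grad, np.array([-1.2, 1.0]), max_iter=500))  # should approach [1, 1]
```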