Some sufficient conditions for the global exponential stability, and lower bounds on the rate of exponential convergence, of cellular neural networks with delay (DCNNs) are obtained by means of a method based on a delay differential inequality. The method, which does not make use of any Lyapunov functional, is simple and valid for the stability analysis of neural networks with delay. Several previously established results are shown to be special cases of the result presented here.
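As an illustration of this type of stability condition (not the paper's specific criterion), a standard sufficient condition for a scalar delayed Hopfield-type unit x'(t) = -a·x(t) + b·tanh(x(t-τ)) is a > |b|, since tanh is 1-Lipschitz; under it the origin is globally exponentially stable for any delay. A minimal Euler simulation, with all parameter values chosen purely for demonstration:

```python
import numpy as np

def simulate(a=2.0, b=0.5, tau=1.0, h=0.01, t_end=20.0, x0=3.0):
    """Euler simulation of x'(t) = -a*x(t) + b*tanh(x(t - tau)).

    With a > |b| (tanh is 1-Lipschitz), the zero equilibrium is globally
    exponentially stable, so the trajectory should decay toward 0.
    """
    n_delay = int(tau / h)              # number of steps spanning the delay
    xs = [x0] * (n_delay + 1)           # constant initial history on [-tau, 0]
    for _ in range(int(t_end / h)):
        x, x_delayed = xs[-1], xs[-1 - n_delay]
        xs.append(x + h * (-a * x + b * np.tanh(x_delayed)))
    return np.array(xs)
```

Roughly, the state decays at least like exp(-(a - |b|)t), so by t = 20 the trajectory is essentially at the origin.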
Global exponential stability (which implies convergence and uniqueness) of the classical iterative algorithm is established by heat-equation and energy-integral methods after embedding the discrete iteration into a continuous flow. The stability condition depends explicitly on the smoothness of the image sequence, the size of the image domain, the value of the regularization parameter, and the discretization step; in particular, as the discretization step approaches zero, stability holds unconditionally. The analysis also clarifies the relations among the iterative algorithm, the original variational formulation, and the PDE system. The proper regularity of solutions and of natural images is briefly surveyed and discussed. Experimental results validate the theoretical claims on both convergence and exponential stability.
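A generic toy illustration of why the discretization step enters such a stability condition (using a small discrete Laplacian as a stand-in operator, not the paper's actual algorithm): the iteration u_{k+1} = u_k - h·A·u_k is the explicit Euler discretization of the flow u' = -A·u, and for symmetric positive definite A it contracts exactly when h < 2/λ_max(A). Shrinking h can only help, consistent with stability becoming unconditional in the continuous limit.

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue magnitude; u <- M @ u contracts iff this is < 1."""
    return max(abs(np.linalg.eigvals(M)))

def euler_iteration_matrix(A, h):
    """Iteration matrix of the explicit Euler step u_{k+1} = u_k - h*A*u_k."""
    return np.eye(A.shape[0]) - h * A

# toy symmetric positive definite operator (1-D discrete Laplacian)
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
lam_max = max(np.linalg.eigvalsh(A))   # stability threshold is h < 2/lam_max
```

Stepping just below the threshold gives a contraction; stepping just above it makes the iteration diverge.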
In this paper we propose a new family of curve search methods for unconstrained optimization problems. These methods search for a new iterate along a curve through the current iterate at each iteration, whereas line search methods find the new iterate on a line starting from the current iterate. The global convergence and linear convergence rate of these curve search methods are investigated under some mild conditions. Numerical results show that some curve search methods are stable and effective in solving large-scale minimization problems.
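The contrast between the two strategies can be sketched as follows. The quadratic curve x(α) = x - α·g + α²·c is a hypothetical example of such a search curve, not the family proposed in the paper; with c = 0 it reduces to an ordinary backtracking line search.

```python
import numpy as np

def curve_search_step(f, x, g, c, alpha=1.0, tau=0.5, sigma=1e-4, max_tries=50):
    """One step of a generic curve search: backtrack along the curve
    x(alpha) = x - alpha*g + alpha**2 * c   (g = gradient at x,
    c = an extra curvature direction) until f decreases sufficiently."""
    fx = f(x)
    slope = -g @ g                       # d/dalpha of f(x(alpha)) at alpha = 0
    for _ in range(max_tries):
        x_new = x - alpha * g + alpha**2 * c
        if f(x_new) <= fx + sigma * alpha * slope:
            return x_new
        alpha *= tau
    return x                             # no acceptable point found on the curve
```

The acceptance test is the usual sufficient-decrease condition; only the set of candidate points (a curve rather than a ray) changes.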
In this paper, we propose several new line search rules for solving unconstrained minimization problems. These rules accept a wider range of step sizes than the corresponding original ones and provide an adequate initial step size at each iteration. The resulting line search algorithms are proved to be globally convergent under some mild conditions. It is also shown that the search direction plays an important role in line search methods, while the step-size rules mainly guarantee global convergence in general cases. The convergence rate of these methods is also investigated. Numerical results show that the new line search algorithms are effective in practical computation.
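For reference, a minimal sketch of the classical Armijo backtracking rule, one of the standard step-size rules that such new rules typically relax; the constants below are conventional textbook choices, not the paper's.

```python
import numpy as np

def armijo_step_size(f, g, x, d, alpha=1.0, c=1e-4, tau=0.5, max_tries=50):
    """Classical Armijo backtracking: shrink alpha until the sufficient-decrease
    condition f(x + alpha*d) <= f(x) + c*alpha*g(x)^T d holds."""
    fx, slope = f(x), g(x) @ d           # slope < 0 for a descent direction d
    for _ in range(max_tries):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            return alpha
        alpha *= tau
    return alpha

# usage: steepest descent on f(x) = ||x||^2
f = lambda x: x @ x
g = lambda x: 2 * x
x = np.array([3.0, -4.0])
for _ in range(20):
    d = -g(x)
    x = x + armijo_step_size(f, g, x, d) * d
```

Any rule that accepts a superset of the Armijo-acceptable step sizes inherits the same descent guarantee while allowing longer steps.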
Consider the nonparametric regression model Y = g₀(T) + u, where Y is real-valued, u is a random error, T is a random d-vector of explanatory variables ranging over a nondegenerate d-dimensional compact set C, and g₀(·) is the unknown smooth regression function, which is m (≥ 0) times continuously differentiable and whose mth-order partial derivatives ∂^m g₀/∂t₁^{i₁}···∂t_d^{i_d} satisfy the Hölder condition with exponent γ ∈ (0, 1], where i₁, . . . , i_d are nonnegative integers satisfying i₁ + ··· + i_d = m. The piecewise polynomial estimator of g₀ based on M-estimates is considered. It is proved that, under certain regularity conditions, the rate of convergence of this estimator is O_p(·), which is the optimal global rate of convergence of least squares estimates for nonparametric regression studied in [10-11].
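A least-squares sketch of a piecewise polynomial estimator for the case d = 1 (least squares is the M-estimate with ρ(u) = u²; the bin count, degree, and test function below are illustrative choices only):

```python
import numpy as np

def piecewise_poly_fit(t, y, n_bins=4, degree=2):
    """Least-squares-fit an independent degree-`degree` polynomial on each of
    `n_bins` equal-width cells of [0, 1] (M-estimation with rho(u) = u**2)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    coefs = [np.polyfit(t[(t >= lo) & (t <= hi)], y[(t >= lo) & (t <= hi)], degree)
             for lo, hi in zip(edges[:-1], edges[1:])]
    return edges, coefs

def piecewise_poly_eval(edges, coefs, t):
    """Evaluate the fitted piecewise polynomial at the points t."""
    idx = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, len(coefs) - 1)
    return np.array([np.polyval(coefs[i], ti) for i, ti in zip(idx, t)])

# usage: recover g0(t) = sin(2*pi*t) from noisy samples
rng = np.random.default_rng(0)
t = rng.uniform(size=400)
y = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=400)
edges, coefs = piecewise_poly_fit(t, y)
grid = np.linspace(0.01, 0.99, 50)
max_err = np.max(np.abs(piecewise_poly_eval(edges, coefs, grid) - np.sin(2 * np.pi * grid)))
```

The estimator's rate results balance the number of cells against the sample size; here the partition is simply fixed for illustration.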
This paper presents a variant of Goldfarb's method for linearly constrained optimization problems. In the variant algorithm, we introduce a concept called conjugate projection, which differs from orthogonal projection. The variant algorithm has global convergence and a superlinear convergence rate.
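The distinction can be illustrated as follows. The formula below is the standard G-conjugate (oblique) projector onto the null space of a constraint matrix A, self-adjoint in the G-inner product; it is offered only as a plausible reading of "conjugate projection", not necessarily the paper's exact construction.

```python
import numpy as np

def orthogonal_projection(A):
    """Orthogonal projector onto null(A): P = I - A^T (A A^T)^{-1} A."""
    return np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)

def conjugate_projection(A, G):
    """Projector onto null(A) that is self-adjoint in the G-inner product
    <x, y>_G = x^T G y:  P = I - G^{-1} A^T (A G^{-1} A^T)^{-1} A."""
    Gi_At = np.linalg.solve(G, A.T)
    return np.eye(A.shape[1]) - Gi_At @ np.linalg.solve(A @ Gi_At, A)

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])   # full-row-rank constraint matrix
G = np.diag([1.0, 2.0, 3.0, 4.0])      # symmetric positive definite metric
P = conjugate_projection(A, G)
```

Both projectors map onto null(A) and are idempotent; they coincide only when G is a multiple of the identity.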
Funding: Projects 60835005 and 90820302, supported by the National Natural Science Foundation of China; Project 2007CB311001, supported by the National Basic Research Program of China.