Fund: Supported by the NSF of China (10871220) and the Doctoral Foundation of China University of Petroleum (Y080820).
Abstract: The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve generalization performance and to decrease the magnitude of the network weights. In this paper, weight boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for a BP neural network with one hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with penalty during the training iteration is also guaranteed. Simulation results for a 3-bit parity problem are presented to support the theoretical results.
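As a concrete illustration of this setting, here is a minimal sketch of one training epoch of an online gradient method with a quadratic (L2) penalty for a network with a single hidden layer, visiting the samples in a fixed order within the epoch. The penalty form lam*(||W||^2 + ||v||^2), the sigmoid activation, and the parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_epoch(W, v, samples, eta=0.1, lam=1e-4):
    """One epoch of online gradient descent with an L2 penalty.

    W : (h, d) hidden-layer weights; v : (h,) output weights.
    Samples are visited in a FIXED order, as the deterministic
    convergence theorem assumes.  Per sample, the step descends
        E = 0.5*(y - t)^2 + lam*(||W||^2 + ||v||^2).
    """
    for x, t in samples:              # fixed order: no shuffling
        z = sigmoid(W @ x)            # hidden-layer output
        y = sigmoid(v @ z)            # network output
        e = y - t
        dy = e * y * (1.0 - y)        # error signal at the output
        grad_v = dy * z + 2.0 * lam * v
        dz = dy * v * z * (1.0 - z)   # back-propagated signal
        grad_W = np.outer(dz, x) + 2.0 * lam * W
        v -= eta * grad_v             # the penalty gradient 2*lam*w
        W -= eta * grad_W             # shrinks every weight each step
    return W, v
```

The penalty gradient 2*lam*w pulls every weight toward zero at every step, which is the mechanism behind the weight boundedness results.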
Fund: Partly supported by the National Natural Science Foundation of China and the Basic Research Program of the Commission of Science, Technology and Industry for National Defense of China.
Abstract: The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are input in a stochastic order. The monotonicity of the error function during the iteration and the boundedness of the weights are both guaranteed. We also present a numerical experiment to support our results.
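In symbols, and again under an assumed quadratic penalty (the abstract does not spell out the penalty form), the update at step k differs from the fixed-order scheme above only in how the sample index is chosen:

```latex
% i_k: index of the training example drawn stochastically at step k;
% eta_k: learning rate; lambda > 0: penalty coefficient (assumed quadratic form).
w^{k+1} = w^{k} - \eta_k \,\nabla_{w} \Big( E_{i_k}(w^{k}) + \lambda \,\| w^{k} \|^{2} \Big).
```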
Abstract: In this paper, a new class of three-term memory gradient methods with a non-monotone line search technique for unconstrained optimization is presented. Global convergence properties of the new methods are discussed. By combining the quasi-Newton method with the new method, the former is modified to have the global convergence property. Numerical results show that the new algorithm is efficient.
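A minimal sketch of the two ingredients named above: a three-term memory gradient direction and a non-monotone acceptance rule in the Grippo-Lampariello-Lucidi style, which compares the trial point against the maximum of the last few function values rather than the latest one alone. The coefficients beta1, beta2 and the constants are illustrative assumptions; the paper's specific class constrains these parameters to guarantee sufficient descent.

```python
import numpy as np

def three_term_direction(g, d_prev, d_prev2, beta1=0.1, beta2=0.05):
    """Three-term memory gradient direction:
        d_k = -g_k + beta1*d_{k-1} + beta2*d_{k-2}.
    beta1, beta2 must satisfy conditions (assumed here) keeping d_k
    a sufficient descent direction, i.e. g_k^T d_k <= -c*||g_k||^2."""
    return -g + beta1 * d_prev + beta2 * d_prev2

def nonmonotone_search(f, x, d, g, f_hist, delta=1e-4, rho=0.5):
    """Backtracking line search with the non-monotone rule
        f(x + a*d) <= max(recent f values) + delta*a*g^T d."""
    f_max = max(f_hist)               # reference over a memory window
    alpha = 1.0
    while f(x + alpha * d) > f_max + delta * alpha * (g @ d):
        alpha *= rho                  # shrink the step until accepted
    return alpha
```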
Abstract: Online gradient methods are widely used for training the weights of neural networks and for other engineering computations. In certain cases the resulting weights may become very large, causing difficulties in implementing the network with electronic circuits. In this paper we introduce a penalty term into the error function of the training procedure to prevent this situation. The convergence of the iterative training procedure and the boundedness of the weight sequence are proved. A supporting numerical example is also provided.
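One way to see why such a penalty keeps the weights bounded, assuming the quadratic form lambda*||w||^2 (the abstract does not spell out the paper's exact penalty): each gradient step applies a contraction factor to the weights before the data-driven correction,

```latex
w^{k+1} = w^{k} - \eta \,\nabla\!\big( \widetilde{E}(w^{k}) + \lambda \|w^{k}\|^{2} \big)
        = (1 - 2\eta\lambda)\, w^{k} - \eta \,\nabla \widetilde{E}(w^{k}),
```

so for 0 < 2*eta*lambda < 1 every step pulls the weights toward zero, counteracting the growth driven by the data term.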
Abstract: In this paper, a three-term derivative-free projection method is proposed for solving nonlinear monotone equations. Under some appropriate conditions, the global convergence and the R-linear convergence rate of the proposed method are analyzed and proved. Requiring no derivative information, the proposed method is able to solve large-scale nonlinear monotone equations. Numerical comparisons show that the proposed method is effective.
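The projection technique underlying such derivative-free methods (in the spirit of Solodov and Svaiter's framework) can be sketched as follows; the line-search constants and the use of a generic search direction d are illustrative assumptions, since the abstract does not give the method's specific three-term direction.

```python
import numpy as np

def projection_step(F, x, d, sigma=1e-4, rho=0.5):
    """One iteration of a derivative-free projection method for a
    monotone system F(x) = 0.

    1. Derivative-free line search: find z = x + alpha*d with
           -F(z)^T d >= sigma * alpha * ||d||^2.
    2. Project x onto the hyperplane {y : F(z)^T (y - z) = 0},
       which separates x from the solution set when F is monotone.
    """
    alpha = 1.0
    z = x + alpha * d
    while -(F(z) @ d) < sigma * alpha * (d @ d):
        alpha *= rho                    # shrink the trial step
        z = x + alpha * d
    Fz = F(z)
    t = (Fz @ (x - z)) / (Fz @ Fz)      # signed distance factor
    return x - t * Fz                   # projection of x onto the hyperplane
```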
Fund: Supported by the National Natural Science Foundation of China (72071202), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX22_2491), the Graduate Innovation Program of China University of Mining and Technology (2022WLKXJ021), and the Undergraduate Training Program for Innovation and Entrepreneurship of China University of Mining and Technology (202210290205Y).
Fund: Project supported by the National Natural Science Foundation of China.
Abstract: In this paper, we present a family of gradient projection methods with an arbitrary initial point. The search directions of the family are given by a single unified formula. Convergence conditions for the methods are given. When the initial point is feasible, the family contains several known algorithms. When the initial point is infeasible, the method coincides with that given in [6]. Finally, we give a new method which has the global convergence property.
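For orientation, one classical member such a family typically contains is Rosen's gradient projection method for linear constraints Ax <= b, whose direction projects the negative gradient onto the null space of the active-constraint matrix A_k (an assumed representative; the abstract does not reproduce the unified formula):

```latex
d_k = -P_k \,\nabla f(x_k), \qquad
P_k = I - A_k^{\top}\big( A_k A_k^{\top} \big)^{-1} A_k .
```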
Abstract: In this paper, we propose a spectral DY-type projection method for nonlinear monotone systems of equations, which is a reasonable combination of the DY conjugate gradient method, the spectral gradient method, and the projection technique. Without any differentiability assumption on the system of equations, we establish the global convergence of the proposed method, which does not rely on any merit function. Furthermore, the method is derivative-free and so is well suited to solving large-scale nonlinear monotone systems. Preliminary numerical results show the feasibility and effectiveness of the proposed method.
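A typical form of such a combined direction (an assumed instance; the abstract does not give the exact coefficients) replaces the gradient by the residual F(x_k), scales it by a spectral Barzilai-Borwein-type parameter theta_k, and adds a DY-type term:

```latex
d_k = -\theta_k F(x_k) + \beta_k d_{k-1}, \qquad
\theta_k = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}}, \qquad
\beta_k = \frac{\| F(x_k) \|^{2}}{d_{k-1}^{\top} y_{k-1}},
```

where s_{k-1} = x_k - x_{k-1} and y_{k-1} = F(x_k) - F(x_{k-1}); the iterates are then corrected by the same hyperplane projection step sketched earlier.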
Fund: This work is supported by the National Natural Science Foundation under Grant No. 10571106.
Abstract: For unconstrained optimization problems, a new class of three-term memory gradient algorithms is proposed. Under certain assumptions, admissible ranges for the parameters are determined that guarantee the three-term memory gradient direction is a sufficient descent direction for the objective function. Global convergence of the algorithms is discussed under a non-monotone step-size search. To obtain an algorithm with better convergence properties, a new memory gradient projection algorithm is proposed by incorporating some techniques from Solodov and Svaiter (2000), and this algorithm is proved to be globally convergent when the objective function is pseudoconvex.
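For reference, pseudoconvexity, the assumption under which the global convergence above is proved, means that every stationary point is a global minimizer: a differentiable f is pseudoconvex if

```latex
\nabla f(y)^{\top}(x - y) \ge 0 \;\Longrightarrow\; f(x) \ge f(y)
\qquad \text{for all } x,\, y,
```

so nabla f(y) = 0 already forces f(x) >= f(y) for every x.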