Funding: Partly supported by the National Natural Science Foundation of China and the Basic Research Program of the Committee of Science, Technology and Industry of National Defense of China.
Abstract: The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are presented in a stochastic order. Both the monotonicity of the error function over the iterations and the boundedness of the weights are guaranteed. We also present a numerical experiment to support our results.
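The kind of iteration discussed in the abstract above can be sketched as follows. This is a minimal illustration only, not the paper's actual algorithm: the network (a single tanh neuron), the learning rate `eta`, and the L2 penalty coefficient `lam` are all assumed for the example. The penalty term keeps the weights bounded, and the examples are visited in a random (stochastic) order each epoch, as the convergence theorem assumes.

```python
import numpy as np

# Hypothetical sketch: online gradient descent with an L2 penalty term.
# All names (eta, lam, sigma) and the model choice are illustrative.

def sigma(x):
    return np.tanh(x)

def sigma_prime(x):
    return 1.0 - np.tanh(x) ** 2

rng = np.random.default_rng(0)
w = rng.normal(size=3)            # weights of a single-output neuron
eta, lam = 0.05, 1e-3             # learning rate and penalty coefficient

# Training examples; the target is realizable by the model.
X = rng.normal(size=(200, 3))
y = np.tanh(X @ np.array([0.5, -1.0, 0.25]))

for epoch in range(50):
    # visit the examples in a stochastic order, as the theorem assumes
    for i in rng.permutation(len(X)):
        z = X[i] @ w
        err = sigma(z) - y[i]
        # gradient of the instantaneous error plus the penalty lam * ||w||^2
        grad = err * sigma_prime(z) * X[i] + 2.0 * lam * w
        w -= eta * grad
```

Without the `2.0 * lam * w` term this is plain online gradient descent; the penalty is what provides the weight boundedness the abstract refers to.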
Funding: Supported by the Natural Science Foundation of China.
Abstract: We discuss the convergence of on-line gradient methods for two-layer feedforward neural networks in general cases. The theory is applied to several common activation functions and energy functions.
Funding: Supported by the National Natural Science Foundation of China (No. 10971019) and the Guangxi Provincial Natural Science Foundation of China (No. 2010GXNSFA013114).
Abstract: This paper is devoted to a class of inverse coefficient problems for nonlinear elliptic hemivariational inequalities. The unknown coefficient depends on the gradient of the solution and belongs to a set of admissible coefficients. It is shown that the nonlinear elliptic hemivariational inequalities are uniquely solvable for the given class of coefficients, and the existence of quasisolutions of the inverse problems is established.