Proximal gradient descent and its accelerated version are effective methods for solving problems given by the sum of a smooth and a non-smooth function. When the smooth function can be represented as a sum of multiple functions, the stochastic proximal gradient method performs well; however, research on its accelerated version remains limited. This paper proposes a proximal stochastic accelerated gradient (PSAG) method to address problems involving a combination of smooth and non-smooth components, where the smooth part corresponds to the average of multiple block sums. In addition, most existing convergence analyses hold only in expectation. To this end, under some mild conditions, we present almost sure convergence of unbiased gradient estimation in the non-smooth setting. Moreover, we establish that the minimum of the squared gradient mapping norm converges to zero with probability one.
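The core update behind such methods can be sketched in a few lines. Below is a minimal stochastic proximal gradient iteration (plain proximal SGD, not the accelerated PSAG scheme of the paper) on a toy two-block problem; the quadratic blocks, l1 weight, and step size are illustrative assumptions:

```python
import math
import random

def soft_threshold(v, t):
    # Proximal operator of t*||x||_1, applied coordinate-wise.
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def prox_sgd(grad_blocks, x0, lam=0.1, step=0.1, iters=500, seed=0):
    """Stochastic proximal gradient: sample one block gradient per step
    (an unbiased estimate of the full gradient), take a gradient step,
    then apply the prox of the non-smooth l1 term lam*||x||_1."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        g = rng.choice(grad_blocks)(x)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

# Toy problem: the smooth part is the average of two quadratic blocks,
# the non-smooth part is 0.1*||x||_1.
blocks = [lambda x: [x[0] - 1.0, x[1]],
          lambda x: [x[0] - 1.0, x[1] + 0.2]]
x = prox_sgd(blocks, [5.0, 5.0])
```

Here the first coordinate converges to the deterministic prox fixed point 0.9, while the second hovers near zero under the block-sampling noise.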
In this paper, the optimal control problem of parabolic integro-differential equations is solved by a gradient recovery based two-grid finite element method. Piecewise linear functions are used to approximate the state and co-state variables, and piecewise constant functions are used to approximate the control variables. Generally, the optimality conditions for the problem are solved iteratively until the control variable reaches the error tolerance. In order to calculate all the variables individually and in parallel, we introduce a gradient recovery based two-grid method. First, we solve the small-scale optimal control problem on the coarse grid. Next, we use the gradient recovery technique to recover the gradients of the state and co-state variables. Finally, using the recovered variables, we solve the large-scale optimal control problem for all variables independently. Moreover, we derive a priori error estimates for the proposed scheme and use an example to validate the theoretical results.
To preserve the edges and details of the image, a new variational model for wavelet domain inpainting containing a non-convex regularizer was proposed. The non-convex regularizer can exploit the local information of the image and performs better than the usual convex ones. In addition, to solve the non-convex minimization problem, an iterative reweighted method and a primal-dual method were designed. The numerical experiments show that the new model not only produces better visual effects but also obtains a higher signal-to-noise ratio than a recent method.
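In the separable case, the iterative reweighted step for a non-convex regularizer reduces to repeated soft-thresholding with solution-dependent weights. A hedged one-dimensional sketch with a log penalty (the penalty form, `lam`, and `eps` are illustrative, not the paper's exact model):

```python
import math

def irl1_denoise(y, lam=0.5, eps=0.1, iters=30):
    """Iterative reweighted l1 for the non-convex penalty lam*log(eps + |x|):
    each outer step solves the weighted l1 problem
    min 0.5*(x - y)^2 + w*|x| with weight w = lam/(eps + |x_old|),
    whose closed-form solution is soft-thresholding."""
    x = list(y)
    for _ in range(iters):
        x = [math.copysign(max(abs(yi) - lam / (eps + abs(xi)), 0.0), yi)
             for yi, xi in zip(y, x)]
    return x

# One small and one large coefficient.
x = irl1_denoise([0.05, 2.0])
```

Small coefficients see a large weight and are driven exactly to zero, while large coefficients are shrunk only slightly: the edge-preserving behavior that motivates non-convex regularization over convex l1.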
The conventional gravity gradient method for locating geologic bodies is fuzzy. When the depth is large and the geologic body is small, the Vzz and Vzx derivative errors are also large. We show that using the status distinguishing factor to optimally determine the corner location is more accurate than the conventional higher-order derivative method. Thus, better resolution of small geologic bodies and faults is obtained by using the gravity gradient method together with trial theoretical model calculation. Field data are processed with better results, providing a better basis for prospecting and for determining subsurface geologic structure.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the well-known Dai-Liao conjugacy condition and, at the same time, coincides with the Newton direction under suitable conditions. Furthermore, this property does not depend on any line search. Next, we also prove that, modulo the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used. The global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
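A minimal sketch of such a hybrid scheme: the direction update uses beta = (1 - t)·beta_LS + t·beta_FR, a convex combination of the Liu-Storey and Fletcher-Reeves parameters. A simple backtracking Armijo search and a steepest-descent restart stand in here for the strong Wolfe conditions of the paper, and the test problem is an illustrative quadratic:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hybrid_cg(f, grad, x0, t=0.5, iters=200, tol=1e-10):
    """Nonlinear CG with beta = (1-t)*beta_LS + t*beta_FR (convex
    combination of Liu-Storey and Fletcher-Reeves parameters)."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        if dot(g, d) >= 0:                 # safeguard: restart with steepest descent
            d = [-gi for gi in g]
        alpha, fx, slope = 1.0, f(x), dot(g, d)
        # Backtracking Armijo line search (a stand-in for strong Wolfe).
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [a - b for a, b in zip(g_new, g)]
        beta_fr = dot(g_new, g_new) / dot(g, g)
        beta_ls = dot(g_new, y) / max(-dot(d, g), 1e-16)
        beta = (1 - t) * beta_ls + t * beta_fr
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x

# Convex quadratic test problem with minimizer (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]
x = hybrid_cg(f, grad, [0.0, 0.0])
```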
In this paper, a new class of three-term memory gradient methods with a non-monotone line search technique for unconstrained optimization is presented. Global convergence properties of the new methods are discussed. By combining the quasi-Newton method with the new method, the former is modified to have a global convergence property. Numerical results show that the new algorithm is efficient.
In the present paper we propose a class of polynomial primal-dual interior-point algorithms for semidefinite optimization based on a kernel function. This kernel function is not a so-called self-regular function, since its growth term increases only linearly. Some new analysis tools were developed to handle the complexity analysis of the algorithms, which use a strategy analogous to that of [5] to design the search directions for the Newton system. The complexity bounds for the algorithms with large- and small-update methods were obtained, namely, O(qn^((p+q)/(q(p+1))) log(n/ε)) and O(q²√n log(n/ε)), respectively.
The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty is often introduced into the training procedure to improve the generalization performance and to decrease the magnitude of the network weights. In this paper, some weight boundedness and deterministic convergence theorems are proved for the online gradient method with a penalty for BP neural networks with a hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with the penalty during the training iteration is also guaranteed. Simulation results for a 3-bit parity problem are presented to support our theoretical results.
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions through local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
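The consensus-plus-local-step structure underlying such algorithms can be sketched as follows. This is plain decentralized gradient descent with a doubly stochastic mixing matrix, not the paper's primal-dual SGD; full local gradients are used for simplicity, and the three-node problem and weights are illustrative:

```python
def decentralized_gd(local_grads, W, x0, step=0.1, iters=400):
    """Each node averages its neighbors' iterates with the mixing matrix W,
    then takes a step along its own local gradient."""
    n, dim = len(local_grads), len(x0)
    xs = [list(x0) for _ in range(n)]
    for _ in range(iters):
        # Consensus step: mix iterates with the doubly stochastic W.
        mixed = [[sum(W[i][j] * xs[j][k] for j in range(n)) for k in range(dim)]
                 for i in range(n)]
        # Local gradient step at the mixed point.
        xs = [[mixed[i][k] - step * local_grads[i](mixed[i])[k] for k in range(dim)]
              for i in range(n)]
    return xs

# Three nodes with local costs 0.5*(x - a_i)^2, a = (0, 3, 6);
# the global minimizer of the average cost is x = 3.
grads = [lambda x, a=a: [x[0] - a] for a in (0.0, 3.0, 6.0)]
W = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]
xs = decentralized_gd(grads, W, [0.0])
```

With a constant step size the node iterates agree on the global minimizer only up to an O(step) neighborhood, matching the "convergence to a neighborhood" behavior described above.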
This paper analyzes the characteristics of the output gradient histogram and the shortcomings of several traditional automatic threshold methods in order to segment the gradient image better. An improved double-threshold method is then proposed, which combines the maximum between-class variance method, the estimating-area method, and the double-threshold method. This method can automatically select two different thresholds to segment gradient images. Computer simulation of the traditional methods and of this algorithm shows that the proposed method yields satisfying results.
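The two-threshold extension of the maximum between-class variance (Otsu) criterion can be sketched by exhaustive search over threshold pairs; the paper's estimating-area technique accelerates the selection, but the brute-force version below shows the criterion itself (the three-peak histogram is an illustrative assumption):

```python
def double_threshold_otsu(hist):
    """Exhaustive two-threshold Otsu: choose (t1, t2) maximizing the
    between-class variance of the classes [0..t1], (t1..t2], (t2..end]."""
    n = len(hist)
    total = float(sum(hist))
    csum = [0.0] * (n + 1)    # cumulative counts
    cmom = [0.0] * (n + 1)    # cumulative first moments
    for i, h in enumerate(hist):
        csum[i + 1] = csum[i] + h
        cmom[i + 1] = cmom[i] + i * h
    mu = cmom[n] / total
    best, best_pair = -1.0, (0, 1)
    for t1 in range(n - 2):
        for t2 in range(t1 + 1, n - 1):
            var = 0.0
            for lo, hi in ((0, t1 + 1), (t1 + 1, t2 + 1), (t2 + 1, n)):
                w = csum[hi] - csum[lo]
                if w == 0.0:
                    continue
                m = (cmom[hi] - cmom[lo]) / w
                var += (w / total) * (m - mu) ** 2
            if var > best:
                best, best_pair = var, (t1, t2)
    return best_pair

# Synthetic gradient histogram with three well-separated peaks.
hist = [0] * 256
hist[20], hist[128], hist[230] = 100, 100, 100
t1, t2 = double_threshold_otsu(hist)
```

The two selected thresholds fall between the peaks, splitting the histogram into three classes.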
Gradiently denitrated gun propellant (GDGP), prepared by a "gradient denitration" strategy, is clearly superior in progressive burning performance to traditional deterred gun propellant. To date, GDGP has been prepared by a tedious two-step method involving organic solvents, which hinders its large-scale preparation. In this paper, GDGP was successfully prepared via a novel and environmentally friendly one-step method. The obtained samples were characterized by FT-IR, Raman, SEM and XPS. The results showed that the content of nitrate groups increases gradiently from the surface to the core in the surface layer of GDGP, and that the surface layer exhibits higher compaction than that of the raw gun propellant, with a well-preserved nitrocellulose structure. The denitration process endowed the propellant surface with regressive energy density and good progressive burning performance, as confirmed by oxygen bomb and closed bomb tests. At the same time, the component losses of the propellant in different solvents were compared; water caused the least component loss. Finally, the stability of GDGP was confirmed by the methyl violet test. This work not only provides an environmentally friendly, simple and economical preparation of GDGP, but also confirms the stability of GDGP prepared by this method.
The aim of this study was to prepare berberine hydrochloride long-circulating liposomes, optimize the formulation and process parameters, and investigate the influence of different factors on the encapsulation efficiency. Berberine hydrochloride liposomes were prepared in response to a transmembrane ion gradient established by the ionophore A23187. Free and liposomal drug were separated by cation exchange resin, and the amount of intraliposomal berberine hydrochloride was then determined by UV spectrophotometry. The optimized encapsulation efficiency of the berberine hydrochloride liposomes was 94.3% ± 2.1% at a drug-to-lipid ratio of 1:20, and the mean diameter was 146.9 nm ± 3.2 nm. Thus, the ionophore A23187-mediated ZnSO_4 gradient method was suitable for preparing berberine hydrochloride liposomes with the desired encapsulation efficiency and drug loading.
Fast solution of large-scale linear equations in finite element analysis is a classical subject in computational mechanics and a key technique in computer aided engineering (CAE) and computer aided manufacturing (CAM). This paper presents a high-efficiency improved symmetric successive over-relaxation (ISSOR) preconditioned conjugate gradient (PCG) method, which maintains the convergence and inherent parallelism of the original form. Ideally, the computation can be reduced by nearly 50% compared with the original algorithm. The method is suitable for high-performance computing with its inherent basic high-efficiency operations. Comparison with numerical results shows that the proposed method has the best performance.
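For reference, a sketch of the classical SSOR-preconditioned CG that the improved method builds on: the preconditioner M = (D + ωL) D⁻¹ (D + ωU)/(ω(2 − ω)) is applied by one forward and one backward triangular solve per iteration. Dense lists are used for clarity, and the 3×3 system is illustrative:

```python
def ssor_apply(A, r, omega=1.0):
    """Apply z = M^{-1} r for the SSOR preconditioner
    M = (D + wL) D^{-1} (D + wU) / (w(2 - w));
    omega = 1 gives the symmetric Gauss-Seidel preconditioner."""
    n = len(A)
    c = omega * (2.0 - omega)
    u = [0.0] * n
    for i in range(n):                      # forward solve (D + wL) u = c*r
        s = c * r[i] - omega * sum(A[i][j] * u[j] for j in range(i))
        u[i] = s / A[i][i]
    z = [0.0] * n
    for i in range(n - 1, -1, -1):          # backward solve (D + wU) z = D*u
        s = A[i][i] * u[i] - omega * sum(A[i][j] * z[j] for j in range(i + 1, n))
        z[i] = s / A[i][i]
    return z

def pcg(A, b, omega=1.0, iters=100, tol=1e-12):
    # Standard preconditioned conjugate gradient loop.
    n = len(A)
    x = [0.0] * n
    r = list(b)
    z = ssor_apply(A, r, omega)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = ssor_apply(A, r, omega)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD test system.
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```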
Let C be a nonempty closed convex subset of a 2-uniformly convex and uniformly smooth Banach space E and let {An}n∈N be a family of monotone and Lipschitz continuous mappings of C into E*. In this article, we consider the improved gradient method, via the hybrid method in mathematical programming [10], for solving the variational inequality problem for {An}, and we prove strong convergence theorems. We also obtain several results that improve well-known results in real 2-uniformly convex and uniformly smooth Banach spaces and in real Hilbert spaces.
A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak, Ribière, and Polyak is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
A simple and effective method for analyzing the stress distribution in a functionally gradient material (FGM) layer on the surface of a structural component is proposed in this paper. Generally, the FGM layer is very thin compared with the characteristic length of the structural component, and the nonhomogeneity exists only in the thin layer. Based on these features, by choosing a small parameter λ which characterizes the stiffness of the layer relative to the component, expanding the stresses and displacements on the two sides of the interface in powers of λ, and then asymptotically applying the continuity conditions of the stresses and displacements on the interface, the coupled governing equations of the layer and the structural component are decoupled. Finally, two examples are given to illustrate the application of the proposed method.
Online gradient methods are widely used for training the weights of neural networks and for other engineering computations. In certain cases, the resulting weights may become very large, causing difficulties in the implementation of the network by electronic circuits. In this paper we introduce a penalty term into the error function of the training procedure to prevent this situation. The corresponding convergence of the iterative training procedure and the boundedness of the weight sequence are proved. A supporting numerical example is also provided.
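The idea can be illustrated on a single linear unit: adding an L2 penalty term lam·||w||² to the per-sample error keeps the online weight sequence bounded. The data set, step size, and penalty weight below are illustrative assumptions, not taken from the paper:

```python
def online_gd_penalty(samples, lam=0.01, step=0.1, epochs=100):
    """Online gradient training of a linear unit with an L2 penalty term
    lam*||w||^2 added to the error function; the penalty shrinks the
    weights at every update and keeps their magnitudes bounded."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:                  # fixed order within each epoch
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            # gradient of 0.5*err^2 + lam*||w||^2 with respect to w
            w = [wi - step * (err * xi + 2.0 * lam * wi) for wi, xi in zip(w, x)]
    return w

# Toy data roughly consistent with w = (1, -1); the penalty biases the
# learned weights slightly toward zero.
samples = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 0.0)]
w = online_gd_penalty(samples)
```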
In this paper, a gradient method with momentum for sigma-pi-sigma neural networks (SPSNN) is considered in order to accelerate the convergence of the learning procedure for the network weights. The momentum coefficient is chosen in an adaptive manner, and the corresponding weak convergence and strong convergence results are proved.
In this paper, a modified Polak-Ribière-Polyak conjugate gradient projection method is proposed for solving large-scale nonlinear convex constrained monotone equations, based on the projection method of Solodov and Svaiter. The obtained method has low complexity and converges globally. Furthermore, the method has also been extended to sparse signal reconstruction in compressive sensing. Numerical experiments illustrate the efficiency of the given method and show that such a non-monotone method is suitable for some large-scale problems.