Funding: Supported by the National Natural Science Foundation of China (No. 72071202) and the Key Laboratory of Mathematics and Engineering Applications, Ministry of Education.
Abstract: As a generalization of the two-term conjugate gradient method (CGM), the spectral CGM is one of the effective methods for solving unconstrained optimization problems. In this paper, we enhance the JJSL conjugate parameter, initially proposed by Jiang et al. (Computational and Applied Mathematics, 2021, 40:174), through a convex combination technique. This improvement allows for an adaptive search direction by integrating a newly constructed spectral gradient-type restart strategy. We then develop a new spectral CGM by employing an inexact line search to determine the step size. Under the weak Wolfe line search, we establish the sufficient descent property of the proposed search direction. Moreover, under general assumptions, including the use of the strong Wolfe line search for step-size calculation, we prove the global convergence of the new algorithm. Finally, results on unconstrained optimization test problems show that the new algorithm is effective.
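The general scheme behind a spectral CGM can be sketched as below. The spectral parameter here is a Barzilai-Borwein-style scaling and the conjugate parameter is PRP+, with a backtracking Armijo search standing in for the weak/strong Wolfe searches; these are illustrative assumptions, not the JJSL parameter or the restart strategy of the paper.

```python
import numpy as np

def spectral_cg(f, grad, x0, max_iter=500, tol=1e-8):
    # Generic spectral CG sketch: d_k = -theta_k * g_k + beta_k * d_{k-1}.
    # theta_k: Barzilai-Borwein-style spectral scaling; beta_k: PRP+ parameter.
    # Backtracking Armijo replaces the Wolfe searches used in the paper.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                       # Armijo backtracking
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        theta = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0   # spectral scaling
        beta = max(0.0, (g_new @ y) / (g @ g))                # PRP+ parameter
        d = -theta * g_new + beta * d
        if g_new @ d > -1e-4 * np.linalg.norm(g_new) * np.linalg.norm(d):
            d = -g_new                         # restart when descent is lost
        x, g = x_new, g_new
    return x

# Strongly convex quadratic test problem.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = spectral_cg(lambda v: 0.5 * v @ A @ v - b @ v, lambda v: A @ v - b, np.zeros(2))
```

The restart branch enforces a sufficient descent angle, which is the role played by the restart strategy described in the abstract.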
Abstract: Proximal gradient descent and its accelerated version are effective methods for solving the sum of smooth and non-smooth problems. When the smooth function can be represented as a sum of multiple functions, the stochastic proximal gradient method performs well; however, research on its accelerated version remains limited. This paper proposes a proximal stochastic accelerated gradient (PSAG) method to address problems involving a combination of smooth and non-smooth components, where the smooth part is the average of multiple block sums. Most existing convergence analyses hold only in expectation. In contrast, under some mild conditions, we present almost sure convergence of unbiased gradient estimation in the non-smooth setting. Moreover, we establish that the minimum of the squared gradient mapping norm converges to zero with probability one.
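The base scheme that PSAG accelerates can be sketched for the lasso-type composite problem 0.5‖Ax − b‖² + λ‖x‖₁, whose proximal operator is soft-thresholding. This is the plain deterministic proximal gradient method only; the paper's method replaces the full gradient with stochastic block estimates and adds acceleration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    # Deterministic proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # gradient of the smooth term
        x = soft_threshold(x - g / L, lam / L)
    return x

# Tiny example: with A = I the minimizer is soft_threshold(b, lam).
x_hat = proximal_gradient(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
```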
Funding: Supported by the "Eleventh Five-Year" National Science and Technology Support Program (No. 2006BAB01A02) and the Pivot Program of the National Natural Science Fund (No. 40930314).
Abstract: The conventional gravity gradient method for plotting geologic body locations is imprecise. When the depth is large and the geologic body is small, the Vzz and Vzx derivative errors are also large. We show that using the status distinguishing factor to optimally determine the corner location is more accurate than the conventional higher-order derivative method. A better resolution of small geologic bodies and faults is thus obtained by using the gravity gradient method together with trial theoretical model calculations. Actual data are processed more effectively, providing a better basis for prospecting and for determining subsurface geologic structure.
Abstract: The spectral conjugate gradient method is an algorithm obtained by combining the spectral gradient method and the conjugate gradient method; it inherits the global convergence and simplicity of the spectral gradient method and the small storage requirement of the conjugate gradient method. Moreover, it has been proved that the search direction at each iteration is a descent direction of the objective function, even without relying on any line search. The spectral conjugate gradient method is applied to full waveform inversion in numerical tests on the Marmousi model. The authors compare numerical results obtained by the steepest descent method, the conjugate gradient method, and the spectral conjugate gradient method, which show that the spectral conjugate gradient method is superior to the other two.
Abstract: In this paper, a new class of three-term memory gradient methods with a non-monotone line search technique for unconstrained optimization is presented. Global convergence properties of the new methods are discussed. By combining the quasi-Newton method with the new method, the former is modified to have the global convergence property. Numerical results show that the new algorithm is efficient.
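The non-monotone line search idea can be sketched as follows. This is the classical rule in the style of Grippo, Lampariello and Lucidi, where a step is accepted against the maximum of the last M function values rather than the current one; the paper's exact non-monotone technique may differ.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, c1=1e-4, M=10):
    # Accept alpha when f(x + alpha*d) <= max(last M function values)
    # + c1*alpha*g'd, so occasional increases of f are tolerated.
    fmax = max(f_hist[-M:])
    alpha = 1.0
    while f(x + alpha * d) > fmax + c1 * alpha * (g @ d):
        alpha *= 0.5
    return alpha
```

It is called with the current gradient `g`, a descent direction `d`, and a history list `f_hist` of recent function values.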
Abstract: The aim of this study was to prepare berberine hydrochloride long-circulating liposomes, optimize the formulation and process parameters, and investigate the influence of different factors on the encapsulation efficiency. Berberine hydrochloride liposomes were prepared in response to a transmembrane ion gradient established by the ionophore A23187. Free and liposomal drug were separated by cation exchange resin, and the amount of intraliposomal berberine hydrochloride was then determined by UV spectrophotometry. The optimized encapsulation efficiency of the liposomes was 94.3% ± 2.1% at a drug-to-lipid ratio of 1:20, and the mean diameter was 146.9 ± 3.2 nm. The ionophore A23187-mediated ZnSO_(4) gradient method was therefore suitable for preparing berberine hydrochloride liposomes with the desired encapsulation efficiency and drug loading.
Funding: Supported by the NSF of China (10871220) and the Doctoral Foundation of China University of Petroleum (Y080820).
Abstract: The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty is often introduced into the training procedure to improve the generalization performance and to decrease the magnitude of the network weights. In this paper, some weight boundedness and deterministic convergence theorems are proved for the online gradient method with penalty for a BP neural network with one hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. The monotonicity of the error function with penalty during the training iteration is also guaranteed. Simulation results for a 3-bit parity problem are presented to support the theoretical results.
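The effect of an L2 penalty in online (sample-by-sample, fixed-order) gradient training can be sketched on a simple logistic unit. This stand-in illustrates the weight-bounding role of the penalty term; it is not the paper's BP network with a hidden layer, and the OR data and hyperparameters are illustrative assumptions.

```python
import numpy as np

def online_gd_with_penalty(X, y, lam=0.01, lr=0.1, epochs=500):
    # Online gradient descent on the logistic loss with an L2 penalty
    # lam*||w||^2 added to the error function; samples are visited in a
    # fixed order within each epoch, as assumed in the paper's theorems.
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):                  # fixed sample order
            p = 1.0 / (1.0 + np.exp(-xi @ w))     # sigmoid output
            w -= lr * ((p - yi) * xi + 2.0 * lam * w)
    return w

# OR problem with a bias feature in the first column (illustrative data).
X = np.array([[1.0, 0, 0], [1.0, 0, 1], [1.0, 1, 0], [1.0, 1, 1]])
y = np.array([0.0, 1.0, 1.0, 1.0])
w = online_gd_with_penalty(X, y)
```

The penalty keeps the weight sequence bounded while the trained unit still classifies the data correctly.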
Funding: Supported by the Research Council of Semnan University.
Abstract: A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak, Ribière, and Polyak is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
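For reference, the three-term PRP direction of Zhang et al. is commonly stated in the literature as follows (reproduced here as background on the cited method, not taken from this abstract):

```latex
d_k = -g_k + \beta_k^{\mathrm{PRP}}\, d_{k-1} - \theta_k\, y_{k-1},
\qquad
\beta_k^{\mathrm{PRP}} = \frac{g_k^{\top} y_{k-1}}{\lVert g_{k-1}\rVert^{2}},
\qquad
\theta_k = \frac{g_k^{\top} d_{k-1}}{\lVert g_{k-1}\rVert^{2}},
\qquad
y_{k-1} = g_k - g_{k-1}.
```

Substituting these choices makes the two cross terms cancel, giving \(g_k^{\top} d_k = -\lVert g_k\rVert^{2}\), so sufficient descent holds regardless of the line search, consistent with the descent result stated in the abstract.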
Abstract: Let C be a nonempty closed convex subset of a 2-uniformly convex and uniformly smooth Banach space E, and let {An}n∈N be a family of monotone and Lipschitz continuous mappings of C into E*. In this article, we consider the improved gradient method obtained via the hybrid method in mathematical programming [10] for solving the variational inequality problem for {An}, and we prove strong convergence theorems. We also obtain several results that improve well-known results in real 2-uniformly convex and uniformly smooth Banach spaces and in real Hilbert spaces.
Funding: Project supported by the National Natural Science Foundation of China (Nos. 51309261, 41030747, 41102181, and 51121005), the National Basic Research Program of China (973 Program) (No. 2011CB013503), and the Young Teachers' Initial Funding Scheme of Sun Yat-sen University (No. 39000-1188140).
Abstract: Fast solution of large-scale linear equations in finite element analysis is a classical subject in computational mechanics and a key technique in computer aided engineering (CAE) and computer aided manufacturing (CAM). This paper presents a high-efficiency improved symmetric successive over-relaxation (ISSOR) preconditioned conjugate gradient (PCG) method, which maintains the convergence and inherent parallelism of the original form. Ideally, the computation can be reduced by nearly 50% compared with the original algorithm, and the method is suitable for high-performance computing with its inherently efficient basic operations. Comparison with numerical results shows that the proposed method has the best performance.
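A textbook SSOR-preconditioned CG iteration, which the ISSOR method above improves upon, can be sketched as follows. The preconditioner choice and test matrix are illustrative; this is not the paper's ISSOR variant.

```python
import numpy as np

def ssor_pcg(A, b, omega=1.0, tol=1e-10, max_iter=200):
    # Conjugate gradient preconditioned with the SSOR matrix
    #   M = (w/(2-w)) * (D/w + L) * D^{-1} * (D/w + U),
    # applied via one forward and one backward triangular solve.
    diag = np.diag(A)
    F = np.diag(diag) / omega + np.tril(A, -1)   # D/w + L
    B = np.diag(diag) / omega + np.triu(A, 1)    # D/w + U
    scale = (2.0 - omega) / omega

    def apply_Minv(r):
        t = np.linalg.solve(F, scale * r)        # forward sweep
        return np.linalg.solve(B, diag * t)      # scaling + backward sweep

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = apply_Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# 1-D Poisson test matrix (SPD, tridiagonal).
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = ssor_pcg(A, b)
```

The two triangular sweeps are the part whose parallelism the ISSOR variant is designed to preserve.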
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51075187).
Abstract: In the one-step inverse finite element approach, an initial blank shape is normally predicted from the final deformed shape. The final deformed shape must be trimmed into the final part after stamping; the trimmed area therefore needs to be compensated manually before using the one-step inverse approach, which causes low efficiency and inconsistency with the real situation. To solve this problem, a one-step positive approach is proposed to simulate the sheet metal stamping process. First, the spatial initial solution of the one-step positive method is obtained using the mapping relationship and area coordinates; then, based on deformation theory, the iterative solution is carried out in a three-dimensional coordinate system using a quasi-conjugate-gradient method. During the iterative process, a contact judgment method is introduced to ensure that the nodes of the spatial initial solution do not separate from the die surface. The predicted results of the sheet metal forming process, including the shape and thickness of the stamped part, are obtained after the iterative solution. The validity of the proposed approach is verified by comparing its predictions with those of the one-step inverse module in Autoform and with the real stamped part. With the one-step positive method, the stamped shape of a regular sheet can be calculated quickly and effectively. During the iterative solution, the quasi-conjugate-gradient method replaces the solving of systems of equations, improving the stability and precision of the algorithm.
Abstract: Online gradient methods are widely used for training the weights of neural networks and for other engineering computations. In certain cases, the resulting weights may become very large, causing difficulties in implementing the network with electronic circuits. In this paper, we introduce a punishing (penalty) term into the error function of the training procedure to prevent this situation. The convergence of the iterative training procedure and the boundedness of the weight sequence are proved. A supporting numerical example is also provided.
Abstract: Conjugate gradient optimization algorithms depend on search directions with different choices of parameters. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method with two kinds of common line searches is proved. First, it is shown, using a reverse modulus of continuity function and a forcing function, that the new method for unconstrained optimization works for a continuously differentiable function with Curry-Altman's step-size rule and a bounded level set. Second, by a comparison technique, some general convergence properties of the new method with other step-size rules are established. Numerical experiments show that the new method is efficient compared with the FR conjugate gradient method.
Funding: Supported by the Fund of the Chongqing Education Committee (KJ091104).
Abstract: In this paper, an efficient conjugate gradient method is given for solving general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
Funding: Supported by the National Natural Science Foundation of China (10571106) and the Fundamental Research Funds for the Central Universities (10CX04044A).
Abstract: In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the FR method, a class of new restarting three-term conjugate gradient methods is presented. Global convergence properties of the new method with two kinds of common line searches are proved.
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 11901561).
Abstract: Many methods have been put forward to solve unconstrained optimization problems, among which the conjugate gradient (CG) method is very important. With the increasing emergence of large-scale problems, subspace techniques have become particularly important and widely used in the field of optimization. In this study, a new CG method is put forward that combines a subspace technique with a cubic regularization model; a special scaled norm in the cubic regularization model is also analyzed. Under certain conditions, some significant characteristics of the search direction are given and the convergence of the algorithm is established. Numerical comparisons on the 145 test functions of the CUTEr library show that the proposed method outperforms two classical CG methods and two new subspace conjugate gradient methods.
Abstract: In this paper, a modified Polak-Ribière-Polyak conjugate gradient projection method is proposed for solving large-scale nonlinear convex constrained monotone equations, based on the projection method of Solodov and Svaiter. The obtained method has the low-complexity property and converges globally. Furthermore, the method has been extended to sparse signal reconstruction in compressive sensing. Numerical experiments illustrate the efficiency of the given method and show that such a non-monotone method is suitable for some large-scale problems.
Abstract: In this paper, two theorems of theoretical and practical significance are given for the preconditioned conjugate gradient method (PCCG). The theorems discuss, respectively, the qualitative property of the iterative solution and the construction principle of the iterative matrix. The authors put forward a new incomplete LU factorization technique for non-M-matrices and a method of constructing the iterative matrix. This improved PCCG is used to solve ill-conditioned problems and large-scale three-dimensional finite element problems, and is contrasted with other methods. The abnormal phenomenon that arises when PCCG is used to solve systems of ill-conditioned equations is analyzed. It is shown that the method proposed in this paper is quite effective in solving large-scale systems of finite element equations and systems of ill-conditioned equations.
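The classical incomplete LU factorization underlying such preconditioners can be sketched as below. This is the standard ILU(0), shown dense for clarity; it is not the authors' modified technique for non-M-matrices.

```python
import numpy as np

def ilu0(A):
    # Dense ILU(0) sketch: Gaussian elimination restricted to the sparsity
    # pattern of A, so no fill-in is created. The strict lower triangle of
    # the result holds L (unit diagonal implied); the upper triangle holds U.
    n = A.shape[0]
    LU = A.astype(float).copy()
    pattern = A != 0
    for k in range(n - 1):
        for i in range(k + 1, n):
            if pattern[i, k]:
                LU[i, k] /= LU[k, k]
                for j in range(k + 1, n):
                    if pattern[i, j]:
                        LU[i, j] -= LU[i, k] * LU[k, j]
    return LU

# For a tridiagonal matrix exact LU creates no fill-in,
# so ILU(0) reproduces the matrix exactly.
n = 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
LU = ilu0(A)
L = np.tril(LU, -1) + np.eye(n)
U = np.triu(LU)
```

In a PCCG iteration, the factors L and U would be applied through one forward and one backward triangular solve per step.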