Journal Literature
193 articles found
A Rank-One Fitting Method with Descent Direction for Solving Symmetric Nonlinear Equations
1
Authors: Gonglin YUAN, Zhongxing WANG, Zengxin WEI. International Journal of Communications, Network and System Sciences, 2009, No. 6, pp. 555-561 (7 pages)
In this paper, a rank-one updated method for solving symmetric nonlinear equations is proposed. This method possesses some features: 1) The updated matrix is positive definite whatever line search technique is used; 2) The search direction is descent for the norm function; 3) The global convergence of the given method is established under reasonable conditions. Numerical results show that the presented method is interesting.
Keywords: rank-one update; global convergence; nonlinear equations; descent direction
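For context, the classical Broyden rank-one update for nonlinear equations F(x) = 0 is shown below; the paper's update differs in that its matrix stays positive definite under any line search (a textbook formula, not the authors' update):

```latex
% Classical Broyden rank-one update for F(x) = 0 (context only)
\[
B_{k+1} = B_k + \frac{(y_k - B_k s_k)\, s_k^{\top}}{s_k^{\top} s_k},
\qquad s_k = x_{k+1} - x_k, \quad y_k = F(x_{k+1}) - F(x_k),
\]
% chosen so that the secant condition $B_{k+1} s_k = y_k$ holds.
```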
Global Convergence of an Extended Descent Algorithm without Line Search for Unconstrained Optimization
2
Authors: Cuiling Chen, Liling Luo, Caihong Han, Yu Chen. Journal of Applied Mathematics and Physics, 2018, No. 1, pp. 130-137 (8 pages)
In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and also obtain the global convergence of the corresponding algorithm. The numerical results illustrate that the new algorithm is effective.
Keywords: unconstrained optimization; descent method; line search; global convergence
Global Convergence of Curve Search Methods for Unconstrained Optimization
3
Authors: Zhiwei Xu, Yongning Tang, Zhen-Jun Shi. Applied Mathematics, 2016, No. 7, pp. 721-735 (15 pages)
In this paper we propose a new family of curve search methods for unconstrained optimization problems, which are based on searching for a new iterate along a curve through the current iterate at each iteration, whereas line search methods find a new iterate on a line starting from the current iterate. The global convergence and linear convergence rate of these curve search methods are investigated under some mild conditions. Numerical results show that some curve search methods are stable and effective in solving some large-scale minimization problems.
Keywords: unconstrained optimization; curve search method; global convergence; convergence rate
A Descent Gradient Method and Its Global Convergence
4
Authors: LIU Jin-kui. Chinese Quarterly Journal of Mathematics (CSCD), 2014, No. 1, pp. 142-150 (9 pages)
Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under the Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It can guarantee the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we also present extensive preliminary numerical experiments to show the efficiency of the proposed method by comparing it with the famous PRP+ method.
Keywords: unconstrained optimization; conjugate gradient method; strong Wolfe line search; sufficient descent property; global convergence
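For reference, the Liu-Storey (LS) parameter and direction referenced in the abstract take the following standard form (the classical 1992 formula, not this paper's descent modification; notation assumed):

```latex
% Liu–Storey (LS) conjugate gradient iteration (standard form)
\[
d_k = -g_k + \beta_k^{LS} d_{k-1}, \qquad
\beta_k^{LS} = \frac{g_k^{\top}(g_k - g_{k-1})}{-d_{k-1}^{\top} g_{k-1}},
\]
% where $g_k = \nabla f(x_k)$ and $x_{k+1} = x_k + \alpha_k d_k$ with
% $\alpha_k$ chosen by a (strong) Wolfe line search.
```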
A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization
5
Authors: Hao Fan, Zhibin Zhu, Anwa Zhou. Applied Mathematics, 2011, No. 9, pp. 1119-1123 (5 pages)
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique that ensures the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve it and carry out further analysis.
Keywords: large-scale unconstrained optimization; conjugate gradient method; sufficient descent property; global convergence
CONVERGENCE RATE OF GRADIENT DESCENT METHOD FOR MULTI-OBJECTIVE OPTIMIZATION (Cited by 1)
6
Authors: Liaoyuan Zeng, Yuhong Dai, Yakui Huang. Journal of Computational Mathematics (SCIE, CSCD), 2019, No. 5, pp. 689-703 (15 pages)
The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOP). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the convergence rate can be strengthened to linear if the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOP enjoys the same convergence properties as for scalar optimization.
Keywords: multi-objective optimization; gradient descent; convergence rate
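To make the Armijo-line-search variant concrete, here is a minimal scalar-objective sketch in Python; the multi-objective method analyzed in the paper instead obtains its direction from a subproblem over all objectives, so the names and parameters below are illustrative assumptions:

```python
import numpy as np

def gradient_descent_armijo(f, grad, x0, beta=0.5, sigma=1e-4,
                            tol=1e-8, max_iter=1000):
    """Gradient descent with Armijo backtracking line search.

    Scalar-objective sketch only: the multi-objective version replaces
    -grad(x) by the solution of a direction-finding subproblem over all
    objectives.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                      # steepest descent direction
        t = 1.0                     # initial steplength
        # Backtrack until the Armijo sufficient-decrease condition holds:
        # f(x + t d) <= f(x) + sigma * t * <g, d>
        while f(x + t * d) > f(x) + sigma * t * g.dot(d):
            t *= beta
        x = x + t * d
    return x

# Usage: minimize a strongly convex quadratic, where linear convergence
# is expected, matching the scalar analogue of the paper's result.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(gradient_descent_armijo(f, grad, x0=[1.0, -2.0]))
```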
On the Convergence Rate of Bregman ADMM for Nonconvex Multi-block Optimization (Cited by 1)
7
Authors: Chen Jianhua, Peng Jianwen. Acta Mathematica Scientia, Series A (CSCD, PKU Core), 2024, No. 1, pp. 195-208 (14 pages)
Wang et al. proposed the alternating direction method of multipliers with Bregman distance (Bregman ADMM) for solving linearly constrained multi-block separable nonconvex optimization problems and proved its convergence. This paper further studies the convergence rate of Bregman ADMM for such problems, as well as sufficient conditions for the boundedness of the iterate sequence generated by the algorithm. Under the Kurdyka-Łojasiewicz (KL) property of the merit function, the paper establishes convergence rates for both the function values and the iterates, showing that different KL exponents of the associated objective yield three different convergence speeds for Bregman ADMM. More precisely, the following results are proved: if the KL exponent θ = 0, the sequence generated by Bregman ADMM converges after finitely many iterations; if θ ∈ (0, 1/2), Bregman ADMM converges linearly; and if θ ∈ (1/2, 1), Bregman ADMM converges sublinearly.
Keywords: nonconvex optimization problem; alternating direction method of multipliers; Kurdyka-Łojasiewicz property; Bregman distance; convergence rate; boundedness
A Bregman-Style Improved ADMM and Its Linearized Version in the Nonconvex Setting: Convergence and Rate Analyses
8
Authors: Peng-Jie Liu, Jin-Bao Jian, Hu Shao, Xiao-Quan Wang, Jia-Wei Xu, Xiao-Yu Wu. Journal of the Operations Research Society of China (EI, CSCD), 2024, No. 2, pp. 298-340 (43 pages)
This work explores a family of two-block nonconvex optimization problems subject to linear constraints. We first introduce a simple but universal Bregman-style improved alternating direction method of multipliers (ADMM) based on the iteration framework of ADMM and the Bregman distance. Then, we utilize the smoothness of one of the components to develop a linearized version of it. Compared to the traditional ADMM, both proposed methods integrate a convex combination strategy into the multiplier update step. For each proposed method, we demonstrate the convergence of the entire iteration sequence to a unique critical point of the augmented Lagrangian function utilizing the powerful Kurdyka-Łojasiewicz property, and we also derive convergence rates for both the sequence of merit function values and the iteration sequence. Finally, some numerical results show that the proposed methods are effective and encouraging for the Lasso model.
Keywords: nonconvex optimization; alternating direction method of multipliers; Kurdyka-Łojasiewicz property; convergence rate
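For reference, the Bregman distance that underlies this and the neighboring ADMM variants is defined, for a differentiable convex kernel φ, as follows (a textbook definition, not specific to this paper):

```latex
% Bregman distance generated by a differentiable convex kernel \phi
\[
D_{\phi}(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle ,
\]
% which recovers the squared Euclidean distance
% $\tfrac{1}{2}\lVert x - y \rVert^{2}$ for the kernel
% $\phi(x) = \tfrac{1}{2}\lVert x \rVert^{2}$.
```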
A Modified PRP Conjugate Gradient Method Based on a Nondescent Line Search and Its Application to Image Restoration
9
Authors: Li Pengyuan. Modern Information Technology, 2024, No. 17, pp. 62-67 (6 pages)
The PRP method is one of the most effective nonlinear conjugate gradient optimization methods; however, it cannot guarantee that it produces a descent direction of the objective function, which makes global convergence for general functions difficult. To guarantee the global convergence of the PRP method, a modified PRP conjugate gradient method is proposed. Targeting nonconvex optimization problems, the paper briefly introduces the nondescent line search technique and some appropriate assumptions, and discusses the global convergence of the modified PRP method. Using MATLAB, the effectiveness and practicality of the new method are verified on unconstrained optimization and image restoration problems.
Keywords: conjugate gradient method; nondescent line search; global convergence; unconstrained optimization; image restoration
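For reference, the classical Polak-Ribière-Polyak (PRP) parameter that this entry and several others in this list modify is shown below (a standard statement; the paper's modification and its nondescent line search are not reproduced here):

```latex
% Polak–Ribière–Polyak (PRP) conjugate parameter (classical form)
\[
\beta_k^{PRP} = \frac{g_k^{\top}(g_k - g_{k-1})}{\lVert g_{k-1} \rVert^{2}},
\qquad
d_k = -g_k + \beta_k^{PRP} d_{k-1},
\]
% where $g_k = \nabla f(x_k)$; the PRP+ variant replaces $\beta_k^{PRP}$
% by $\max\{\beta_k^{PRP}, 0\}$ to aid global convergence.
```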
A Bregman-style Partially Symmetric Alternating Direction Method of Multipliers for Nonconvex Multi-block Optimization
10
Authors: Peng-jie LIU, Jin-bao JIAN, Guo-dong MA. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2023, No. 2, pp. 354-380 (27 pages)
The alternating direction method of multipliers (ADMM) is one of the most successful and powerful methods for separable minimization. Based on the idea of symmetric ADMM in two-block optimization, we add an updating formula for the Lagrange multiplier, without restricting its position, for the multi-block case. Then, combining with the Bregman distance, a Bregman-style partially symmetric ADMM is presented in this work for nonconvex multi-block optimization with linear constraints, and the Lagrange multiplier is updated twice with different relaxation factors in the iteration scheme. Under suitable conditions, the global convergence, strong convergence, and convergence rate of the presented method are analyzed and obtained. Finally, some preliminary numerical results are reported to support the correctness of the theoretical assertions, and these show that the presented method is numerically effective.
Keywords: nonconvex optimization; multi-block optimization; alternating direction method of multipliers; Kurdyka-Łojasiewicz property; convergence rate
A Symmetric Linearized Alternating Direction Method of Multipliers for a Class of Stochastic Optimization Problems
11
Authors: Jia HU, Qimin HU. Journal of Systems Science and Information (CSCD), 2023, No. 1, pp. 58-77 (20 pages)
The alternating direction method of multipliers (ADMM) has received much attention in recent years due to various demands from machine learning and big-data-related optimization. In 2013, Ouyang et al. extended the ADMM to the stochastic setting for solving some stochastic optimization problems, inspired by the structural risk minimization principle. In this paper, we consider a stochastic variant of symmetric ADMM, named symmetric stochastic linearized ADMM (SSL-ADMM). In particular, using the framework of variational inequalities, we analyze the convergence properties of SSL-ADMM. Moreover, we show that, with high probability, SSL-ADMM has an O((ln N)·N^(-1/2)) constraint violation bound and objective error bound for convex problems, and an O((ln N)^2·N^(-1)) constraint violation bound and objective error bound for strongly convex problems, where N is the iteration number. Symmetric ADMM can improve algorithmic performance compared to classical ADMM, and numerical experiments on statistical machine learning show that this improvement is also present in the stochastic setting.
Keywords: alternating direction method of multipliers; stochastic approximation; expected convergence rate and high-probability bound; convex optimization; machine learning
A Coordinate Gradient Descent Method for Nonsmooth Nonseparable Minimization (Cited by 9)
12
Authors: Zheng-Jian Bai, Michael K. Ng, Liqun Qi. Numerical Mathematics (Theory, Methods and Applications) (SCIE), 2009, No. 4, pp. 377-402 (26 pages)
This paper presents a coordinate gradient descent approach for minimizing the sum of a smooth function and a nonseparable convex function. We find a search direction by solving a subproblem obtained by a second-order approximation of the smooth function and adding a separable convex function. Under a local Lipschitzian error bound assumption, we show that the algorithm possesses global and local linear convergence properties. We also give some numerical tests (including image recovery examples) to illustrate the efficiency of the proposed method.
Keywords: coordinate descent; global convergence; linear convergence rate
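As a rough illustration of the coordinate-descent skeleton (only the smooth part; the paper's method additionally handles the nonseparable convex term through a second-order subproblem), here is a minimal Python sketch with assumed names and stepsizes:

```python
import numpy as np

def cyclic_coordinate_descent(grad, x0, step=0.1, n_epochs=100, tol=1e-8):
    """Bare-bones cyclic coordinate gradient descent for a smooth f.

    Sketch only: the paper's method also handles a nonseparable convex
    term via a second-order direction-finding subproblem; here each
    sweep simply takes a gradient step in one coordinate at a time.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_epochs):
        if np.linalg.norm(grad(x)) < tol:
            break
        for i in range(x.size):
            x[i] -= step * grad(x)[i]   # update coordinate i only
    return x

# Usage on a convex quadratic f(x) = 0.5 x^T A x, whose gradient is A x.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
grad = lambda x: A @ x
print(cyclic_coordinate_descent(grad, x0=[1.0, 1.0]))
```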
A modified three-term conjugate gradient method with sufficient descent property (Cited by 1)
13
Authors: Saman Babaie-Kafaki. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2015, No. 3, pp. 263-272 (10 pages)
A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak and Ribière, and by Polyak, is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
Keywords: unconstrained optimization; conjugate gradient method; eigenvalue; sufficient descent condition; global convergence
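For context, a standard statement of the Zhang et al. three-term PRP direction, one of the two ingredients hybridized here (notation assumed; the hybrid parameter itself is not reproduced):

```latex
% Three-term PRP direction of Zhang et al. (one ingredient of the hybrid)
\[
d_k = -g_k + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}, \qquad
\beta_k^{PRP} = \frac{g_k^{\top} y_{k-1}}{\lVert g_{k-1}\rVert^{2}}, \quad
\theta_k = \frac{g_k^{\top} d_{k-1}}{\lVert g_{k-1}\rVert^{2}},
\]
% with $y_{k-1} = g_k - g_{k-1}$; a direct computation gives
% $g_k^{\top} d_k = -\lVert g_k \rVert^{2}$, i.e. sufficient descent
% holds independently of the line search.
```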
A New Nonlinear Conjugate Gradient Method for Unconstrained Optimization Problems (Cited by 1)
14
Authors: LIU Jin-kui, WANG Kai-rong, SONG Xiao-qian, DU Xiang-lin. Chinese Quarterly Journal of Mathematics (CSCD), 2010, No. 3, pp. 444-450 (7 pages)
In this paper, an efficient conjugate gradient method is given to solve general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical comparisons with the PRP+ method show that the new method is efficient and stable, so it can be widely used in scientific computation.
Keywords: unconstrained optimization; conjugate gradient method; strong Wolfe line search; sufficient descent property; global convergence
Three New Hybrid Conjugate Gradient Methods for Optimization
15
Authors: Anwa Zhou, Zhibin Zhu, Hao Fan, Qian Qing. Applied Mathematics, 2011, No. 3, pp. 303-308 (6 pages)
In this paper, three new hybrid nonlinear conjugate gradient methods are presented, which produce a sufficient descent search direction at every iteration. This property is independent of any line search and of the convexity of the objective function. Under suitable conditions, we prove that the proposed methods converge globally for general nonconvex functions. The numerical results show that all three new hybrid methods are efficient for the given test problems.
Keywords: conjugate gradient method; descent direction; global convergence
PRP-Type Direct Search Methods for Unconstrained Optimization
16
Authors: Qunfeng Liu, Wanyou Cheng. Applied Mathematics, 2011, No. 6, pp. 725-731 (7 pages)
Three PRP-type direct search methods for unconstrained optimization are presented. The methods adopt three kinds of recently developed descent conjugate gradient methods and the idea of frame-based direct search methods. Global convergence is shown for continuously differentiable functions. Data profiles and performance profiles are adopted to analyze the numerical experiments, and the results show that the proposed methods are effective.
Keywords: direct search methods; descent conjugate gradient methods; frame-based methods; global convergence; data profile; performance profile
Two Modified HS-type Conjugate Gradient Methods with Restart Directions (Cited by 3)
17
Authors: Liu Pengjie, Wu Yanqiang, Shao Feng, Zhang Yan, Shao Hu. Acta Mathematica Scientia, Series A (CSCD, PKU Core), 2023, No. 2, pp. 570-580 (11 pages)
The conjugate gradient method is one of the effective methods for large-scale unconstrained optimization. This paper first modifies the Hestenes-Stiefel (HS) conjugate parameter and then, by introducing a restart condition and a restart direction, establishes two modified HS-type conjugate gradient methods with restart directions. The first method produces a descent direction under the weak Wolfe line search; the second achieves sufficient descent independently of any line search. Under conventional assumptions, the global convergence of both new methods is analyzed and obtained. Finally, numerical comparisons and performance profiles show that the new methods are effective.
Keywords: unconstrained optimization; conjugate gradient method; restart direction; weak Wolfe line search; global convergence
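For reference, the classical Hestenes-Stiefel (HS) parameter that both new methods modify reads as follows (standard form; the paper's restart condition and restart direction are not reproduced here):

```latex
% Hestenes–Stiefel (HS) conjugate parameter (classical form)
\[
\beta_k^{HS} = \frac{g_k^{\top} y_{k-1}}{d_{k-1}^{\top} y_{k-1}}, \qquad
y_{k-1} = g_k - g_{k-1}, \qquad
d_k = -g_k + \beta_k^{HS} d_{k-1},
\]
% where $g_k = \nabla f(x_k)$ and $d_{k-1}$ is the previous search direction.
```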
On the Sublinear Convergence Rate of Multi-block ADMM (Cited by 6)
18
Authors: Tian-Yi Lin, Shi-Qian Ma, Shu-Zhong Zhang. Journal of the Operations Research Society of China (EI, CSCD), 2015, No. 3, pp. 251-274 (24 pages)
The alternating direction method of multipliers (ADMM) is widely used in solving structured convex optimization problems. Despite its success in practice, the convergence of the standard ADMM for minimizing the sum of N (N≥3) convex functions, whose variables are linked by linear constraints, remained unclear for a very long time. Recently, Chen et al. (Math Program, doi:10.1007/s10107-014-0826-5, 2014) provided a counter-example showing that the ADMM for N≥3 may fail to converge without further conditions. Since the ADMM for N≥3 has been very successful when applied to many problems arising from real practice, it is worth investigating further under what sufficient conditions it can be guaranteed to converge. In this paper, we present such sufficient conditions that guarantee a sublinear convergence rate for the ADMM with N≥3. Specifically, we show that if one of the functions is convex (not necessarily strongly convex), the other N-1 functions are strongly convex, and the penalty parameter lies in a certain region, then the ADMM converges with rate O(1/t) in a certain ergodic sense and o(1/t) in a certain non-ergodic sense, where t denotes the number of iterations. As a by-product, we also provide a simple proof of the O(1/t) convergence rate of two-block ADMM in terms of both objective error and constraint violation, without assuming any condition on the penalty parameter or strong convexity of the functions.
Keywords: alternating direction method of multipliers; sublinear convergence rate; convex optimization
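For reference, the standard N-block ADMM iteration whose convergence is at issue takes the following textbook form (notation assumed; ρ > 0 is the penalty parameter and L_ρ the augmented Lagrangian):

```latex
% Standard N-block ADMM for  min sum_i f_i(x_i)  s.t.  sum_i A_i x_i = b,
% with augmented Lagrangian
% L_rho(x_1,...,x_N; lambda) = sum_i f_i(x_i)
%   - <lambda, sum_i A_i x_i - b> + (rho/2) || sum_i A_i x_i - b ||^2.
\[
\begin{aligned}
x_i^{k+1} &\in \operatorname*{arg\,min}_{x_i}\;
  \mathcal{L}_{\rho}\bigl(x_1^{k+1},\ldots,x_{i-1}^{k+1},\, x_i,\,
  x_{i+1}^{k},\ldots,x_N^{k};\, \lambda^k\bigr),
  \qquad i = 1,\ldots,N, \\
\lambda^{k+1} &= \lambda^k
  - \rho\Bigl(\textstyle\sum_{i=1}^{N} A_i x_i^{k+1} - b\Bigr).
\end{aligned}
\]
```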
A NEW ε-GENERALIZED PROJECTION METHOD OF STRONGLY SUB-FEASIBLE DIRECTIONS FOR INEQUALITY CONSTRAINED OPTIMIZATION (Cited by 3)
19
Authors: Jinbao JIAN, Guodong MA, Chuanhao GUO. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2011, No. 3, pp. 604-618 (15 pages)
In this paper, nonlinear optimization problems with inequality constraints are discussed. Combining the ideas of the method of strongly sub-feasible directions and the ε-generalized projection technique, a new algorithm starting from an arbitrary initial iteration point is presented for the discussed problems. At each iteration, the search direction is generated by a new ε-generalized projection explicit formula, and the steplength is yielded by a new Armijo line search. Under some necessary assumptions, not only does the algorithm possess global and strong convergence, but the iterates also enter the feasible set after finitely many iterations. Finally, some preliminary numerical results are reported.
Keywords: ε-generalized projection; global and strong convergence; inequality constraints; method of strongly sub-feasible directions; optimization
A Modification of the Conjugate Descent Method and Its Convergence Analysis
20
Authors: Yu Wenjin, Zhang Jinsong. Journal of Science of Teachers' College and University, 2023, No. 8, pp. 21-24 (4 pages)
This paper studies the conjugate descent method and makes a fairly natural modification. The steplength rule adopts the Wolfe line search, the global convergence of the algorithm is proved under rather weak conditions, and numerical experiments are reported. The results show that the modified algorithm is effective.
Keywords: unconstrained optimization; conjugate descent method; Wolfe line search; global convergence