Journal Articles
21 articles found
1. Proximal Methods for Elliptic Optimal Control Problems with Sparsity Cost Functional (cited 2 times)
Authors: Andreas Schindele, Alfio Borzì. Applied Mathematics, 2016, No. 9, pp. 967-992 (26 pages).
First-order proximal methods that solve linear and bilinear elliptic optimal control problems with a sparsity cost functional are discussed. In particular, fast convergence of these methods is proved. For benchmarking purposes, inexact proximal schemes are compared to an inexact semismooth Newton method. Results of numerical experiments are presented to demonstrate the computational effectiveness of proximal schemes applied to infinite-dimensional elliptic optimal control problems and to validate the theoretical estimates. (A rough sketch of the basic proximal step for an L1 sparsity cost follows this entry.)
Keywords: optimal control; elliptic PDE; nonsmooth optimization; proximal method; semismooth Newton method
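Illustrative sketch for entry 1 (not the authors' scheme): for a discretized problem of the form min_u f(u) + beta*||u||_1 with smooth f, a first-order proximal (ISTA-type) iteration is a gradient step on f followed by soft-thresholding. The quadratic f below is a hypothetical stand-in for a reduced PDE objective; all names and parameter values are illustrative.

    import numpy as np

    def soft_threshold(u, tau):
        # proximal operator of tau * ||.||_1 (elementwise soft-thresholding)
        return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

    def proximal_gradient(grad_f, lip, beta, u0, iters=200):
        # minimize f(u) + beta * ||u||_1 with fixed step 1/lip (lip: Lipschitz constant of grad f)
        u = u0.copy()
        for _ in range(iters):
            u = soft_threshold(u - grad_f(u) / lip, beta / lip)
        return u

    # hypothetical smooth part: f(u) = 0.5 * ||A u - b||^2 (stand-in for a reduced control objective)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 60))
    b = rng.standard_normal(40)
    grad_f = lambda u: A.T @ (A @ u - b)
    lip = np.linalg.norm(A, 2) ** 2
    u_sparse = proximal_gradient(grad_f, lip, beta=0.1, u0=np.zeros(60))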
2. Almost Sure Convergence of Proximal Stochastic Accelerated Gradient Methods
Authors: Xin Xiang, Haoming Xia. Journal of Applied Mathematics and Physics, 2024, No. 4, pp. 1321-1336 (16 pages).
Proximal gradient descent and its accelerated version are effective methods for solving the sum of smooth and non-smooth problems. When the smooth function can be represented as a sum of multiple functions, the stochastic proximal gradient method performs well. However, research on its accelerated version remains limited. This paper proposes a proximal stochastic accelerated gradient (PSAG) method to address problems involving a combination of smooth and non-smooth components, where the smooth part corresponds to the average of multiple block sums. Moreover, most existing convergence analyses hold only in expectation. To this end, under some mild conditions, we establish almost sure convergence with unbiased gradient estimates in the non-smooth setting, and we show that the minimum of the squared gradient-mapping norm converges to zero with probability one. (A minimal sketch of one accelerated proximal stochastic step follows this entry.)
Keywords: proximal stochastic accelerated method; almost sure convergence; composite optimization; non-smooth optimization; stochastic optimization; accelerated gradient method
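Illustrative sketch for entry 2 (not the authors' exact PSAG algorithm): for min_x (1/n) * sum_i f_i(x) + g(x) with nonsmooth g, an accelerated stochastic proximal step samples a single block gradient at a Nesterov-type extrapolated point and then applies the prox of g. The toy problem, step size, and momentum value below are assumptions made for illustration only.

    import numpy as np

    def psag_sketch(grad_fi, prox_g, n, x0, step, momentum=0.9, iters=2000, seed=0):
        # sketch of a proximal stochastic accelerated gradient loop
        rng = np.random.default_rng(seed)
        x, x_prev = x0.copy(), x0.copy()
        for _ in range(iters):
            y = x + momentum * (x - x_prev)        # extrapolation (acceleration)
            i = rng.integers(n)                    # unbiased sampling of one block
            x_prev, x = x, prox_g(y - step * grad_fi(i, y), step)
        return x

    # toy instance: f_i(x) = 0.5 * ||A_i x - b_i||^2, g(x) = 0.1 * ||x||_1
    rng = np.random.default_rng(1)
    A = rng.standard_normal((10, 5, 8))
    b = rng.standard_normal((10, 5))
    grad_fi = lambda i, x: A[i].T @ (A[i] @ x - b[i])
    prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0.0)
    x_hat = psag_sketch(grad_fi, prox_g, n=10, x0=np.zeros(8), step=0.01)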
3. An Inexact Proximal Method with Proximal Distances for Quasimonotone Equilibrium Problems
Authors: Lennin Mallma Ramirez, Erik Alex Papa Quiroz, P.R. Oliveira. Journal of the Operations Research Society of China (EI, CSCD), 2017, No. 4, pp. 545-561 (17 pages).
In this paper, we propose an inexact proximal point method to solve equilibrium problems using proximal distances and the diagonal subdifferential. Under some natural assumptions on the problem and the quasimonotonicity condition on the bifunction, we prove that the sequence generated by the method converges to a solution point of the problem.
Keywords: equilibrium problems; quasimonotonicity; proximal distance; proximal method
4. Proximal Methods with Bregman Distances to Solve VIP on Hadamard Manifolds with Null Sectional Curvature
Authors: Erik Alex Papa Quiroz, Paulo Roberto Oliveira. Journal of the Operations Research Society of China (EI, CSCD), 2021, No. 3, pp. 499-523 (25 pages).
We present an extension of the proximal point method with Bregman distances to solve variational inequality problems (VIP) on Hadamard manifolds with null sectional curvature. Under some natural assumptions, such as the existence of solutions of the VIP and the monotonicity of the multivalued vector field, we prove that the sequence of iterates generated by the method converges to a solution of the problem. Furthermore, this convergence is linear or superlinear with respect to the Bregman distance.
Keywords: proximal point methods; Hadamard manifolds; Bregman distances; variational inequality problems; monotone vector field
5. A note on a family of proximal gradient methods for quasi-static incremental problems in elastoplastic analysis
Authors: Yoshihiro Kanno. Theoretical & Applied Mechanics Letters (CAS, CSCD), 2020, No. 5, pp. 315-320 (6 pages).
Accelerated proximal gradient methods have recently been developed for solving quasi-static incremental problems of elastoplastic analysis with several different yield criteria. It has been demonstrated through numerical experiments that these methods can outperform conventional optimization-based approaches in computational plasticity. However, in the literature these algorithms are described individually for specific yield criteria, and hence there exists no guide for applying the algorithms to other yield criteria. This short paper presents a general form of algorithm design, independent of specific forms of yield criteria, that unifies the existing proximal gradient methods. A clear interpretation is also given to each step of the presented general algorithm, so that each update rule is linked to the underlying physical laws in terms of mechanical quantities.
Keywords: elastoplastic analysis; incremental problem; nonsmooth convex optimization; first-order optimization method; proximal gradient method
6. On Optimal Sparse-Control Problems Governed by Jump-Diffusion Processes
Authors: Beatrice Gaviraghi, Andreas Schindele, Mario Annunziato, Alfio Borzì. Applied Mathematics, 2016, No. 16, pp. 1978-2004 (27 pages).
A framework for the optimal sparse-control of the probability density function of a jump-diffusion process is presented. This framework is based on the partial integro-differential Fokker-Planck (FP) equation that governs the time evolution of the probability density function of this process. In the stochastic process and, correspondingly, in the FP model, the control function enters as a time-dependent coefficient. The objectives of the control are to minimize discrete-in-time, respectively continuous-in-time, tracking functionals together with their L2- and L1-costs, where the latter is considered to promote control sparsity. An efficient proximal scheme for solving these optimal control problems is considered. Results of numerical experiments are presented to validate the theoretical results and the computational effectiveness of the proposed control framework.
Keywords: jump-diffusion processes; partial integro-differential Fokker-Planck equation; optimal control theory; nonsmooth optimization; proximal methods
7. On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems (cited 4 times)
Authors: Haibin Zhang, Jiaojiao Jiang, Zhi-Quan Luo. Journal of the Operations Research Society of China (EI), 2013, No. 2, pp. 163-186 (24 pages).
We consider a class of nonsmooth convex optimization problems where the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of both the l1-norm and the l2-norm of the optimization variables. This class of problems arises naturally from applications in sparse group Lasso, which is a popular technique for variable selection. An effective approach to solve such problems is the Proximal Gradient Method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function. (A sketch of the closed-form proximal step for the combined l1/l2 regularizer follows this entry.)
Keywords: proximal gradient method; error bound; linear convergence; sparse group Lasso
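Illustrative sketch for entry 7 (an assumed form, not the paper's implementation): for a single group, the proximal operator of t*(lam1*||x||_1 + lam2*||x||_2) has a closed form, namely elementwise soft-thresholding by t*lam1 followed by shrinkage of the whole group by the l2 factor; this yields the proximal gradient step used for sparse group Lasso.

    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def prox_l1_l2(x, t, lam1, lam2):
        # closed-form prox of t*(lam1*||x||_1 + lam2*||x||_2) on one group:
        # elementwise soft-thresholding followed by group-wise shrinkage
        z = soft(x, t * lam1)
        nz = np.linalg.norm(z)
        return np.zeros_like(z) if nz <= t * lam2 else (1.0 - t * lam2 / nz) * z

    def pgm_step(x, grad_f, step, groups, lam1, lam2):
        # one proximal gradient step for f(x) + sum_g [lam1*||x_g||_1 + lam2*||x_g||_2]
        y = x - step * grad_f(x)
        out = y.copy()
        for g in groups:                           # groups: list of index arrays
            out[g] = prox_l1_l2(y[g], step, lam1, lam2)
        return out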
8. HYBRID REGULARIZED CONE-BEAM RECONSTRUCTION FOR AXIALLY SYMMETRIC OBJECT TOMOGRAPHY
Authors: 李兴娥, 魏素花, 许海波, 陈冲. Acta Mathematica Scientia (SCIE, CSCD), 2022, No. 1, pp. 403-419 (17 pages).
In this paper, we consider 3D tomographic reconstruction for axially symmetric objects from a single radiograph formed by cone-beam X-rays. All contemporary density reconstruction methods in high-energy X-ray radiography are based on the assumption that the cone beam can be treated as fan beams located at parallel planes perpendicular to the symmetric axis, so that the density of the whole object can be recovered layer by layer. Considering the relationship between different layers, we undertake a cone-beam global reconstruction to resolve the ambiguity effect at the material interfaces of the reconstruction results. In view of the anisotropy of classical discrete total variations, a new discretization of the total variation, which yields sharp edges and has better isotropy, is introduced in our reconstruction model. Furthermore, considering that the object density consists of continually changing parts and jumps, a high-order regularization term is introduced. The final hybrid regularization model is solved using the alternating proximal gradient method, which has recently been applied in image processing. Density reconstruction results are presented for simulated radiographs, which show that the proposed method improves the preservation of edge locations.
Keywords: high-energy X-ray radiography; cone-beam global reconstruction; inverse problem; total variation; alternating proximal gradient method
9. Inexact Proximal Point Methods for Quasiconvex Minimization on Hadamard Manifolds
Authors: Nancy Baygorrea, Erik Alex Papa Quiroz, Nelson Maculan. Journal of the Operations Research Society of China (EI, CSCD), 2016, No. 4, pp. 397-424 (28 pages).
In this paper we present two inexact proximal point algorithms to solve minimization problems for quasiconvex objective functions on Hadamard manifolds. We prove that under natural assumptions the sequences generated by the algorithms are well defined and converge to critical points of the problem. We also present an application of the method to demand theory in economics.
Keywords: proximal point method; quasiconvex function; Hadamard manifolds; nonsmooth optimization; abstract subdifferential
10. Proximal-Based Pre-correction Decomposition Methods for Structured Convex Minimization Problems
Authors: Yuan-Yuan Huang, San-Yang Liu. Journal of the Operations Research Society of China (EI), 2014, No. 2, pp. 223-235 (13 pages).
This paper presents two proximal-based pre-correction decomposition methods for convex minimization problems with separable structures. The methods, derived from Chen and Teboulle's proximal-based decomposition method and He's parallel splitting augmented Lagrangian method, retain the nice convergence property of the proximal point method and can compute variables in parallel, like He's method, under the prediction-correction framework. Convergence results are established without additional assumptions, and the efficiency of the proposed methods is illustrated by some preliminary numerical experiments.
Keywords: structured convex programming; parallel splitting; proximal point method; augmented Lagrangian; prediction-correction method
11. A Modified Proximal Gradient Method for a Family of Nonsmooth Convex Optimization Problems
Authors: Ying-Yi Li, Hai-Bin Zhang, Fei Li. Journal of the Operations Research Society of China (EI, CSCD), 2017, No. 3, pp. 391-403 (13 pages).
In this paper, we propose a modified proximal gradient method for solving a class of nonsmooth convex optimization problems that arise in many contemporary statistical and signal processing applications. The proposed method adopts a new scheme to construct the descent direction based on the proximal gradient method. It is proven that the modified proximal gradient method is Q-linearly convergent without the assumption of strong convexity of the objective function. Finally, some numerical experiments are conducted to evaluate the proposed method.
Keywords: nonsmooth convex optimization; modified proximal gradient method; Q-linear convergence
12. On the Linear Convergence of the Approximate Proximal Splitting Method for Non-smooth Convex Optimization
Authors: Mojtaba Kadkhodaie, Maziar Sanjabi, Zhi-Quan Luo. Journal of the Operations Research Society of China (EI), 2014, No. 2, pp. 123-141 (19 pages).
Consider the problem of minimizing the sum of two convex functions, one being smooth and the other non-smooth. In this paper, we introduce a general class of approximate proximal splitting (APS) methods for solving such minimization problems. Methods in the APS class include many well-known algorithms such as the proximal splitting method, the block coordinate descent (BCD) method, and the approximate gradient projection methods for smooth convex optimization. We establish the linear convergence of APS methods under a local error bound assumption. Since the latter is known to hold for compressive sensing and sparse group LASSO problems, our analysis implies the linear convergence of the BCD method for these problems without a strong convexity assumption.
Keywords: convex optimization; proximal splitting method; block coordinate descent method; convergence rate analysis; local error bound
13. Relaxed inertial proximal Peaceman-Rachford splitting method for separable convex programming
Authors: Yongguang HE, Huiyun LI, Xinwei LIU. Frontiers of Mathematics in China (SCIE, CSCD), 2018, No. 3, pp. 555-578 (24 pages).
The strictly contractive Peaceman-Rachford splitting method is an effective method for solving separable convex optimization problems, and the inertial proximal Peaceman-Rachford splitting method is one of its important variants. It is known that the convergence of the inertial proximal Peaceman-Rachford splitting method can be ensured if the relaxation factor in the Lagrangian multiplier updates is underdetermined, which means that the steps for the Lagrangian multiplier updates are shrunk conservatively. Although small steps play an important role in ensuring convergence, they should be strongly avoided in practice. In this article, we propose a relaxed inertial proximal Peaceman-Rachford splitting method, which has a larger feasible set for the relaxation factor and thus admits larger steps in the Lagrangian multiplier updates. We establish the global convergence of the proposed algorithm under the same conditions as the inertial proximal Peaceman-Rachford splitting method. Numerical results on a sparse signal recovery problem in compressive sensing and a total-variation-based image denoising problem demonstrate the effectiveness of our method.
Keywords: convex programming; inertial proximal Peaceman-Rachford splitting method; relaxation factor; global convergence
14. PAPR Reduction in Massive MU-MIMO-OFDM Systems Using the Proximal Gradient Method
Authors: Davinder Singh, R.K. Sarin. Journal of Communications and Information Networks (CSCD), 2019, No. 1, pp. 88-94 (7 pages).
In this paper, we address the issue of peak-to-average power ratio (PAPR) reduction in large-scale multiuser multiple-input multiple-output (MU-MIMO) orthogonal frequency-division multiplexing (OFDM) systems. PAPR reduction and the multiuser interference (MUI) cancellation problem are jointly formulated as an l∞-norm based composite convex optimization problem, which can be solved efficiently using the iterative proximal gradient method. The proximal operator associated with the l∞-norm is evaluated using a low-cost sorting algorithm. The proposed method adaptively chooses the step size to accelerate convergence. Simulation results reveal that the proximal gradient method converges swiftly while providing considerable PAPR reduction and lower out-of-band radiation. (A sketch of a sorting-based evaluation of the l∞-norm proximal operator follows this entry.)
Keywords: OFDM; MU-MIMO; PAPR reduction; proximal operator; proximal gradient method
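Illustrative sketch for entry 14 (one standard way to do it, not necessarily the paper's exact routine): by Moreau decomposition, the prox of lam*||.||_inf equals v - lam * P(v/lam), where P is the Euclidean projection onto the unit l1-ball, computable with an O(n log n) sort; real-valued vectors are assumed here for simplicity.

    import numpy as np

    def project_l1_ball(v, radius=1.0):
        # Euclidean projection onto {x : ||x||_1 <= radius} via sorting
        if np.abs(v).sum() <= radius:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]               # sorted magnitudes, descending
        css = np.cumsum(u)
        k = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
        theta = (css[k] - radius) / (k + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def prox_linf(v, lam):
        # Moreau decomposition: prox_{lam*||.||_inf}(v) = v - lam * proj_{l1 ball}(v / lam)
        return v - lam * project_l1_ball(v / lam, 1.0)

    # small check: the largest peaks are clipped toward the remaining entries
    v = np.array([3.0, -0.5, 0.2, 2.5])
    p = prox_linf(v, lam=1.0)                      # -> [2.25, -0.5, 0.2, 2.25]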
15. Proximity point algorithm for low-rank matrix recovery from sparse noise corrupted data
Authors: 朱玮, 舒适, 成礼智. Applied Mathematics and Mechanics (English Edition) (SCIE, EI), 2014, No. 2, pp. 259-268 (10 pages).
The method of recovering a low-rank matrix with an unknown fraction of its entries arbitrarily corrupted is known as robust principal component analysis (RPCA). This RPCA problem, under some conditions, can be exactly solved via convex optimization by minimizing a combination of the nuclear norm and the l1 norm. In this paper, an algorithm based on the Douglas-Rachford splitting method is proposed for solving the RPCA problem. First, the convex optimization problem is solved by canceling the constraint on the variables, and then the proximity operators of the objective function are computed alternately. The new algorithm can exactly recover the low-rank and sparse components simultaneously, and it is proved to be convergent. Numerical simulations demonstrate the practical utility of the proposed algorithm. (A minimal sketch of the two proximity operators involved follows this entry.)
Keywords: low-rank matrix recovery; sparse noise; Douglas-Rachford splitting method; proximity operator
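Illustrative sketch for entry 15 (only the two building blocks, not the paper's full Douglas-Rachford iteration): the proximity operator of the nuclear norm is singular value thresholding, and that of the l1 norm is elementwise soft-thresholding; splitting algorithms for RPCA alternate between these two maps. The data and parameter values below are arbitrary.

    import numpy as np

    def soft_threshold(X, tau):
        # proximity operator of tau * ||.||_1 (elementwise shrinkage)
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svt(X, tau):
        # proximity operator of tau * ||.||_* (singular value thresholding)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    # toy usage: D = low-rank matrix + small dense noise
    rng = np.random.default_rng(0)
    D = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20)) + 0.01 * rng.standard_normal((20, 20))
    L_hat = svt(D, tau=0.5)                        # low-rank estimate
    S_hat = soft_threshold(D - L_hat, tau=0.1)     # sparse residual estimate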
16. A LQP BASED INTERIOR PREDICTION-CORRECTION METHOD FOR NONLINEAR COMPLEMENTARITY PROBLEMS (cited 5 times)
Authors: Bing-sheng He, Li-zhi Liao, Xiao-ming Yuan. Journal of Computational Mathematics (SCIE, CSCD), 2006, No. 1, pp. 33-44 (12 pages).
To solve nonlinear complementarity problems (NCP), at each iteration the classical proximal point algorithm solves a well-conditioned sub-NCP, while the Logarithmic-Quadratic Proximal (LQP) method solves a system of nonlinear equations (the LQP system). This paper presents a practical LQP-method-based prediction-correction method for NCP. The predictor is obtained by solving the LQP system approximately under a significantly relaxed restriction, and the new iterate (the corrector) is computed directly by an explicit formula derived from the original LQP method. The implementation is very easy to carry out. Global convergence of the method is proved under the same mild assumptions as for the original LQP method. Finally, numerical results for traffic equilibrium problems are provided to verify that the method is effective for some practical problems.
Keywords: Logarithmic-Quadratic Proximal method; nonlinear complementarity problems; prediction-correction; inexact criterion
17. FINITE PROXIMATE METHOD FOR CONVECTION-DIFFUSION EQUATION (cited 9 times)
Authors: ZHAO Ming-deng, LI Tai-ru, HUAI Wen-xin, LI Liang-liang. Journal of Hydrodynamics (SCIE, EI, CSCD), 2008, No. 1, pp. 47-53 (7 pages).
A finite proximate method was presented to solve the convection-diffusion equation in curvilinear grids. The method has the characteristics of an automatic upwind effect and good stability. It was verified against the exact solution and against other calculated results for two-dimensional dam-break flow in a frictionless, horizontal channel. The calculated results are in good agreement with the exact solution and the other calculations, which shows that the finite proximate method can be applied to solve the convection-diffusion equation directly, not only in rectangular grids but also in curvilinear grids.
Keywords: convection-diffusion equation; finite proximate method; dam-break flow
18. On the Convergence Rate of an Inexact Proximal Point Algorithm for Quasiconvex Minimization on Hadamard Manifolds
Authors: Nancy Baygorrea, Erik Alex Papa Quiroz, Nelson Maculan. Journal of the Operations Research Society of China (EI, CSCD), 2017, No. 4, pp. 457-467 (11 pages).
In this paper, we present an analysis of the rate of convergence of an inexact proximal point algorithm for solving minimization problems for quasiconvex objective functions on Hadamard manifolds. We prove that, under natural assumptions, the sequence generated by the algorithm converges linearly or superlinearly to a critical point of the problem.
Keywords: proximal point method; quasiconvex function; Hadamard manifolds; nonsmooth optimization; abstract subdifferential; convergence rate
19. On Globally Q-Linear Convergence of a Splitting Method for Group Lasso
Authors: Yun-Da Dong, Hai-Bin Zhang, Huan Gao. Journal of the Operations Research Society of China (EI, CSCD), 2018, No. 3, pp. 445-454 (10 pages).
In this paper, we discuss a splitting method for group Lasso. By assuming that the sequence of step lengths has a positive lower bound and a positive upper bound (unrelated to the given problem data), we prove a Q-linear rate of convergence of the distance from the iterates to the solution set. Moreover, we make comparisons with the convergence of the proximal gradient method analyzed very recently.
Keywords: group Lasso; splitting method; proximal gradient method; Q-linear rate of convergence
20. A HOMOTOPY-BASED ALTERNATING DIRECTION METHOD OF MULTIPLIERS FOR STRUCTURED CONVEX OPTIMIZATION
Authors: Yiqing Dai, Zheng Peng. Annals of Applied Mathematics, 2015, No. 3, pp. 262-273 (12 pages).
The alternating direction method of multipliers (ADMM for short) is efficient for linearly constrained convex optimization problems. The practical computational cost of ADMM depends on the sub-problem solvers. The proximal point algorithm is a common sub-problem solver; however, the proximal parameter is sensitive in the proximal ADMM. In this paper, we propose a homotopy-based proximal linearized ADMM, in which a homotopy method is used to solve the sub-problems at each iteration. Under some suitable conditions, the global convergence and a worst-case convergence rate of O(1/k) are proven for the proposed method. Some preliminary numerical results indicate the validity of the proposed method.
Keywords: separable convex optimization; alternating direction method of multipliers; proximal point method; homotopy method