Journal Articles
23 articles found.
1. A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization (cited 4 times)
Authors: Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl Henrik Johansson. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 5, pp. 812-833.
Abstract: The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions by using local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
Keywords: distributed nonconvex optimization, linear speedup, Polyak-Łojasiewicz (P-L) condition, primal-dual algorithm, stochastic gradient descent
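As a rough illustration of the kind of primal-dual SGD update described above, the sketch below runs a consensus-based primal-dual stochastic gradient iteration on a toy problem with quadratic local costs. The ring topology, step sizes, and the exact form of the dual update are illustrative assumptions, not the scheme analyzed in the paper.

```python
import numpy as np

# Hypothetical setup: n nodes, each holding a noisy quadratic local cost
# f_i(x) = 0.5 * ||A_i x - b_i||^2; gradients are sampled with noise to mimic SGD.
rng = np.random.default_rng(0)
n, d, T = 8, 5, 2000
A = [rng.standard_normal((20, d)) for _ in range(n)]
b = [A_i @ rng.standard_normal(d) for A_i in A]

# Ring-topology mixing matrix (doubly stochastic); L = I - W acts as a graph Laplacian.
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25
L = np.eye(n) - W

x = np.zeros((n, d))                 # primal variables, one row per node
v = np.zeros((n, d))                 # dual variables enforcing consensus
eta, alpha, beta = 0.01, 1.0, 0.1    # illustrative step sizes

for t in range(T):
    # Each node draws a noisy gradient of its local cost (stand-in for a mini-batch).
    g = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) / len(b[i]) + 0.1 * rng.standard_normal(d)
                  for i in range(n)])
    disagreement = L @ x                           # how far nodes are from consensus
    x = x - eta * (g + alpha * disagreement + v)   # primal step
    v = v + eta * beta * disagreement              # dual ascent on the consensus constraint

print("spread across nodes:", np.linalg.norm(x - x.mean(axis=0)))
```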
2. Improved nonconvex optimization model for low-rank matrix recovery (cited 1 time)
Authors: 李玲芝, 邹北骥, 朱承璋. Journal of Central South University (SCIE, EI, CAS, CSCD), 2015, No. 3, pp. 984-991.
Abstract: Low-rank matrix recovery is an important problem extensively studied in the machine learning, data mining and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model, by solving which the low-rank matrix can be recovered from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved theoretically that this algorithm has a stronger theoretical guarantee than existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to two real-world problems, i.e., removing noise from verification codes and removing watermarks from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods.
Keywords: machine learning, computer vision, matrix recovery, nonconvex optimization
3. Adaptive sparse and dense hybrid representation with nonconvex optimization
Authors: Xuejun WANG, Feilong CAO, Wenjian WANG. Frontiers of Computer Science (SCIE, EI, CSCD), 2020, No. 4, pp. 65-78.
Abstract: Sparse representation has been widely used in signal processing, pattern recognition, computer vision, etc. Excellent achievements have been made in both theoretical research and practical applications. However, there are two limitations on its application to classification. One is that sufficient training samples are required for each class, and the other is that samples should be uncorrupted. In order to alleviate the above problems, a sparse and dense hybrid representation (SDR) framework has been proposed, where the training dictionary is decomposed into a class-specific dictionary and a non-class-specific dictionary. SDR puts an ℓ1 constraint on the coefficients of the class-specific dictionary. Nevertheless, it over-emphasizes sparsity and overlooks the correlation information in the class-specific dictionary, which may lead to poor classification results. To overcome this disadvantage, an adaptive sparse and dense hybrid representation with non-convex optimization (ASDR-NO) is proposed in this paper. The trace norm is adopted for the class-specific dictionary, which differs from general approaches. By doing so, the dictionary structure becomes adaptive and the representation ability of the dictionary is improved. Meanwhile, a non-convex surrogate is used to approximate the rank function in the dictionary decomposition in order to avoid a suboptimal solution of the original rank minimization; it can be solved by the iteratively reweighted nuclear norm (IRNN) algorithm. Extensive experiments conducted on benchmark data sets have verified the effectiveness and advancement of the proposed algorithm compared with state-of-the-art sparse representation methods.
Keywords: sparse representation, trace norm, nonconvex optimization, low-rank matrix recovery, iteratively reweighted nuclear norm
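The iteratively reweighted nuclear norm (IRNN) idea referenced in the abstract can be sketched on a toy low-rank denoising problem: each iteration reweights the singular values with the derivative of a nonconvex surrogate and then solves a weighted singular value thresholding subproblem. The log surrogate and all parameter values below are illustrative assumptions, not the ASDR-NO model itself.

```python
import numpy as np

rng = np.random.default_rng(1)
m, r = 50, 3
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, m))   # low-rank ground truth
Y = X_true + 0.3 * rng.standard_normal((m, m))                        # noisy observation

lam, gamma, mu = 2.0, 1.0, 1.0   # penalty weight, surrogate parameter, quadratic weight

def surrogate_grad(sigma):
    # Derivative of the log surrogate lam*log(1 + sigma/gamma): small singular values
    # are penalized more heavily than large ones, unlike the convex nuclear norm.
    return lam / (gamma + sigma)

X = Y.copy()
for _ in range(30):
    # Reweight using the singular values of the current iterate ...
    w = surrogate_grad(np.linalg.svd(X, compute_uv=False))
    # ... then solve the weighted singular value thresholding subproblem
    # argmin_X  sum_i w_i * sigma_i(X) + (mu/2) * ||X - Y||_F^2.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.maximum(s - w / mu, 0.0)
    X = (U * s_new) @ Vt

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```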
4. Convergence of Bregman Alternating Direction Method of Multipliers for Nonseparable Nonconvex Objective with Linear Constraints
Authors: Xiaotong Zeng, Junping Yao, Haoming Xia. Journal of Applied Mathematics and Physics, 2024, No. 2, pp. 639-660.
Abstract: In this paper, our focus lies on a two-block linearly constrained nonseparable nonconvex optimization problem with coupling terms. The most classical algorithm for such problems, the alternating direction method of multipliers (ADMM), still requires a gradient Lipschitz continuity condition on the objective function to ensure overall convergence, as far as is currently known. However, many practical applications do not satisfy such smoothness conditions. In this study, we justify the convergence of a variant Bregman ADMM for the problem with coupling terms, circumventing the need for global Lipschitz continuity of the gradient. We demonstrate that the iterative sequence generated by our approach converges to a critical point of the problem when the corresponding function fulfills the Kurdyka-Łojasiewicz inequality and certain assumptions hold. In addition, we illustrate the convergence rate of the algorithm.
Keywords: nonseparable nonconvex optimization, Bregman ADMM, Kurdyka-Łojasiewicz inequality
5. Generalized Nonconvex Low-Rank Algorithm for Magnetic Resonance Imaging (MRI) Reconstruction
Authors: 吴新峰, 刘且根, 卢红阳, 龙承志, 王玉皞. Journal of Donghua University (English Edition) (EI, CAS), 2017, No. 2, pp. 316-321.
Abstract: In recent years, utilizing low-rank prior information to reconstruct a signal from a small number of measurements has attracted much attention. In this paper, a generalized nonconvex low-rank (GNLR) algorithm for magnetic resonance imaging (MRI) reconstruction is proposed, which reconstructs the image from highly under-sampled k-space data. In the algorithm, a nonconvex surrogate function replacing the conventional nuclear norm is utilized to enhance the low-rank property inherent in the reconstructed image. The alternating direction method of multipliers (ADMM) is applied to solve the resulting nonconvex model. Extensive experimental results have demonstrated that the proposed method can consistently recover MRIs efficiently, and it outperforms the current state-of-the-art approaches in terms of higher peak signal-to-noise ratio (PSNR) and lower high-frequency error norm (HFEN) values.
Keywords: magnetic resonance imaging (MRI), low-rank approximation, nonconvex optimization, alternating direction method of multipliers (ADMM)
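A loose sketch of the kind of ADMM splitting described above, assuming for brevity that the whole image is treated as one low-rank matrix rather than the patch groups typically used in MRI reconstruction, and glossing over the real-valuedness constraint on the image; the log-type nonconvex surrogate, sampling rate, and penalty parameters are illustrative, not the paper's GNLR settings.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
X_true = rng.standard_normal((n, 4)) @ rng.standard_normal((4, n))   # toy low-rank "image"
mask = rng.random((n, n)) < 0.4                                      # random k-space sampling pattern
Y = mask * np.fft.fft2(X_true, norm="ortho")                         # under-sampled k-space data

lam, gamma, rho = 5.0, 1.0, 1.0

def weighted_svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

X = np.real(np.fft.ifft2(Y, norm="ortho"))      # zero-filled initialization
Z, U_dual = X.copy(), np.zeros_like(X)

for _ in range(50):
    # X-update: minimize 0.5*||mask*FFT(X) - Y||^2 + (rho/2)*||X - (Z - U_dual)||^2,
    # solved entrywise in k-space since the sampling mask is diagonal there.
    F = np.fft.fft2(Z - U_dual, norm="ortho")
    F = np.where(mask, (Y + rho * F) / (1.0 + rho), F)
    X = np.real(np.fft.ifft2(F, norm="ortho"))
    # Z-update: reweighted singular value thresholding, i.e. the proximal step of a
    # nonconvex log-type surrogate of the rank, with weights taken at the current iterate.
    sigma = np.linalg.svd(Z, compute_uv=False)
    Z = weighted_svt(X + U_dual, (lam / (gamma + sigma)) / rho)
    U_dual = U_dual + X - Z                       # dual update

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```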
6. Convergence of ADMM for multi-block nonconvex separable optimization models (cited 14 times)
Authors: Ke GUO, Deren HAN, David Z. W. WANG, Tingting WU. Frontiers of Mathematics in China (SCIE, CSCD), 2017, No. 5, pp. 1139-1162.
Abstract: For minimization problems whose objective function is the sum of two functions without coupled variables and whose constraint function is linear, the alternating direction method of multipliers (ADMM) has exhibited its efficiency and its convergence is well understood. When either the number of separable functions involved is more than two, or one of the functions is nonconvex, ADMM or its directly extended version may not converge. In this paper, we consider multi-block separable optimization problems with linear constraints and without convexity of the involved component functions. Under the assumption that the associated function satisfies the Kurdyka-Łojasiewicz inequality, we prove that any cluster point of the iterative sequence generated by ADMM is a critical point, under the mild condition that the penalty parameter is sufficiently large. We also present some sufficient conditions guaranteeing sublinear and linear rates of convergence of the algorithm.
Keywords: nonconvex optimization, separable structure, alternating direction method of multipliers (ADMM), Kurdyka-Łojasiewicz inequality
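The directly extended multi-block ADMM analyzed above can be sketched on a three-block toy problem. For readability the component functions below are convex (two quadratics and an l1 term), but the Gauss-Seidel update order and multiplier step follow the same iteration scheme whose nonconvex behavior the paper studies; the matrices and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m, d = 30, 20
A1, A2 = rng.standard_normal((m, d)), rng.standard_normal((m, d))
b = rng.standard_normal(m)
lam, rho = 0.5, 1.0

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x1, x2, x3, u = np.zeros(d), np.zeros(d), np.zeros(m), np.zeros(m)
I = np.eye(d)

# Problem: min 0.5||x1||^2 + 0.5||x2||^2 + lam*||x3||_1  s.t.  A1 x1 + A2 x2 + x3 = b.
for _ in range(200):
    # Block 1: smooth quadratic piece, closed-form ridge-type solve.
    c = A2 @ x2 + x3 - b + u
    x1 = np.linalg.solve(I + rho * A1.T @ A1, -rho * A1.T @ c)
    # Block 2: same structure, using the freshly updated x1 (Gauss-Seidel sweep).
    c = A1 @ x1 + x3 - b + u
    x2 = np.linalg.solve(I + rho * A2.T @ A2, -rho * A2.T @ c)
    # Block 3: nonsmooth l1 piece, soft-thresholding since its coefficient matrix is the identity.
    c = A1 @ x1 + A2 @ x2 - b + u
    x3 = soft(-c, lam / rho)
    # Multiplier update on the residual of the linear constraint.
    u = u + (A1 @ x1 + A2 @ x2 + x3 - b)

print("constraint residual:", np.linalg.norm(A1 @ x1 + A2 @ x2 + x3 - b))
```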
7. Online blind source separation based on joint diagonalization (cited 2 times)
Authors: Li Ronghua, Zhou Guoxu, Yang Zuyuan, Xie Shengli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2009, No. 2, pp. 229-233.
Abstract: A new algorithm is proposed for joint diagonalization. With a modified objective function, the new algorithm not only excludes trivial and unbalanced solutions successfully, but is also easily optimized. In addition, with the new objective function, the proposed algorithm can work well in online blind source separation (BSS) for the first time, although this family of algorithms has so far been thought to be valid only for batch-mode BSS. Simulations show that it is a very competitive joint diagonalization algorithm.
Keywords: blind source separation, joint diagonalization, nonconvex optimization
8. Global optimality conditions for quadratic 0-1 programming with inequality constraints (cited 1 time)
Authors: 张连生, 陈伟, 姚奕荣. Journal of Shanghai University (English Edition) (CAS), 2010, No. 2, pp. 150-154.
Abstract: Quadratic 0-1 problems with linear inequality constraints are briefly considered in this paper. Global optimality conditions for these problems, including a necessary condition and some sufficient conditions, are presented. The necessary condition is expressed without dual variables. The relations between the global optimal solutions of nonconvex quadratic 0-1 problems and the associated relaxed convex problems are also studied.
Keywords: quadratic 0-1 programming, optimality condition, nonconvex optimization, integer programming, convex duality
9. A Bregman-style Partially Symmetric Alternating Direction Method of Multipliers for Nonconvex Multi-block Optimization
Authors: Peng-jie LIU, Jin-bao JIAN, Guo-dong MA. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2023, No. 2, pp. 354-380.
Abstract: The alternating direction method of multipliers (ADMM) is one of the most successful and powerful methods for separable minimization problems. Based on the idea of symmetric ADMM for two-block optimization, we add an updating formula for the Lagrange multiplier without restricting its position in the multi-block case. Then, combining it with the Bregman distance, a Bregman-style partially symmetric ADMM is presented for nonconvex multi-block optimization with linear constraints, in which the Lagrange multiplier is updated twice with different relaxation factors in the iteration scheme. Under suitable conditions, the global convergence, strong convergence and convergence rate of the presented method are analyzed. Finally, some preliminary numerical results are reported to support the theoretical assertions, and these show that the presented method is numerically effective.
Keywords: nonconvex optimization, multi-block optimization, alternating direction method of multipliers, Kurdyka-Łojasiewicz property, convergence rate
10. Complexity of Proximal Quasi-Newton Methods for a Class of Nonconvex Optimization Problems
Authors: 金玲子. Chinese Quarterly Journal of Mathematics, 2023, No. 1, pp. 62-84.
Abstract: This paper studies a class of nonconvex composite optimization problems whose objective is the sum of an average of nonconvex (weakly) smooth functions and a convex nonsmooth function, where the gradient of the former is Hölder continuous. By exploring the structure of such problems, we first propose a proximal (quasi-)Newton algorithm, wPQN (proximal quasi-Newton algorithm for weakly smooth optimization), and investigate its theoretical complexity for finding an approximate solution. Then we propose a stochastic variant, wPSQN (proximal stochastic quasi-Newton algorithm for weakly smooth optimization), which allows a random subset of component functions to be used at each iteration. Moreover, motivated by the recent success of variance reduction techniques, we propose two variance-reduced algorithms, wPSQN-SVRG and wPSQN-SARAH, and investigate their computational complexity separately.
Keywords: nonconvex optimization, nonsmooth optimization, Hölder continuity, proximal quasi-Newton, variance reduction
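A minimal sketch of a proximal (quasi-)Newton step for a composite objective of the type studied above, assuming a smooth least-squares component plus an l1 term; a scalar Barzilai-Borwein scaling stands in for the quasi-Newton metric, and the stochastic and variance-reduced variants (wPSQN, wPSQN-SVRG, wPSQN-SARAH) are not reproduced here. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
m, d = 40, 60
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)
lam = 0.1

def grad_f(x):
    # Smooth part f(x) = 0.5*||Ax - b||^2 (chosen for readability; the paper allows
    # merely Holder-continuous, possibly nonconvex gradients).
    return A.T @ (A @ x - b)

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
g = grad_f(x)
step = 1e-2
for k in range(300):
    x_new = prox_l1(x - step * g, step * lam)           # proximal (quasi-Newton) step
    g_new = grad_f(x_new)
    s, y = x_new - x, g_new - g
    if y @ y > 1e-12:
        step = max(min((s @ y) / (y @ y), 1.0), 1e-4)   # scalar BB scaling as the "metric"
    x, g = x_new, g_new

print("nonzeros:", np.count_nonzero(np.abs(x) > 1e-6))
```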
11. L1/2-Regularized Quantile Method for Sparse Phase Retrieval
Authors: Si Shen, Jiayao Xiang, Huijuan Lv, Ailing Yan. Open Journal of Applied Sciences (CAS), 2022, No. 12, pp. 2135-2151.
Abstract: Sparse phase retrieval aims to recover a sparse signal from quadratic measurements. However, the measurements are often affected by outliers and asymmetrically distributed noise. This paper introduces a novel method that combines quantile regression with the L1/2 regularizer. The result is a non-convex, non-smooth, non-Lipschitz optimization problem. We propose an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the corresponding optimization problem. Numerous numerical experiments show that this method can recover sparse signals with fewer measurements and is robust to dense bounded noise and Laplace noise.
Keywords: sparse phase retrieval, nonconvex optimization, alternating direction method of multipliers, quantile regression model, robustness
12. An Efficient Adaptive Iteratively Reweighted l1 Algorithm for Elastic lq Regularization
Authors: Yong Zhang, Wanzhou Ye. Advances in Pure Mathematics, 2016, No. 7, pp. 498-506.
Abstract: In this paper, we propose an efficient adaptive iteratively reweighted l1 algorithm (A-IRL1 algorithm) for solving the elastic lq regularization problem. We prove that the sequence generated by the A-IRL1 algorithm is convergent for any rational q and that the limit is a critical point of the elastic lq regularization problem. Under certain conditions, we present an error bound for the limit point of the convergent sequence.
Keywords: compressed sensing, elastic lq minimization, nonconvex optimization, convergence, critical point
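A sketch of the iteratively reweighted l1 idea for an elastic lq model of the kind described above: the lq term is majorized by a weighted l1 term at the current iterate, and the resulting weighted elastic-net subproblem is solved approximately by proximal gradient steps. The smoothing constant eps, the weight formula, and all parameters are illustrative assumptions rather than the paper's exact A-IRL1 rules.

```python
import numpy as np

rng = np.random.default_rng(5)
m, d = 50, 100
A = rng.standard_normal((m, d))
x_true = np.zeros(d); x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(m)

q, lam1, lam2, eps = 0.5, 0.1, 0.01, 1e-3
Lip = np.linalg.norm(A, 2) ** 2 + lam2          # Lipschitz constant of the smooth part

x = np.zeros(d)
for outer in range(15):
    # Reweighting step: the l_q penalty is majorized at the current iterate by a
    # weighted l_1 term with weights w_i = q*(|x_i| + eps)^(q-1).
    w = q * (np.abs(x) + eps) ** (q - 1)
    for inner in range(50):
        # Weighted elastic-net subproblem solved by proximal gradient:
        # smooth part 0.5*||Ax-b||^2 + 0.5*lam2*||x||^2, prox of sum_i lam1*w_i*|x_i|.
        g = A.T @ (A @ x - b) + lam2 * x
        z = x - g / Lip
        x = np.sign(z) * np.maximum(np.abs(z) - lam1 * w / Lip, 0.0)

print("support recovered:", np.nonzero(np.abs(x) > 1e-3)[0])
```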
13. A Bregman-Style Improved ADMM and its Linearized Version in the Nonconvex Setting: Convergence and Rate Analyses
Authors: Peng-Jie Liu, Jin-Bao Jian, Hu Shao, Xiao-Quan Wang, Jia-Wei Xu, Xiao-Yu Wu. Journal of the Operations Research Society of China (EI, CSCD), 2024, No. 2, pp. 298-340.
Abstract: This work explores a family of two-block nonconvex optimization problems subject to linear constraints. We first introduce a simple but universal Bregman-style improved alternating direction method of multipliers (ADMM) based on the iteration framework of ADMM and the Bregman distance. Then, we exploit the smoothness of one of the components to develop a linearized version of it. Compared to the traditional ADMM, both proposed methods integrate a convex combination strategy into the multiplier update step. For each proposed method, we demonstrate the convergence of the entire iteration sequence to a unique critical point of the augmented Lagrangian function using the powerful Kurdyka-Łojasiewicz property, and we also derive convergence rates for both the sequence of merit function values and the iteration sequence. Finally, some numerical results show that the proposed methods are effective and encouraging for the Lasso model.
Keywords: nonconvex optimization, alternating direction method of multipliers, Kurdyka-Łojasiewicz property, convergence rate
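Since the numerical results above are reported for the Lasso model, the following sketch runs a plain two-block ADMM on Lasso with a damped multiplier step, i.e. a convex combination of the old multiplier and the standard update, to gesture at the convex-combination strategy mentioned in the abstract. The Bregman terms and the paper's exact parameter conditions are omitted; theta, rho, and lam are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(6)
m, d = 80, 200
A = rng.standard_normal((m, d))
x_true = np.zeros(d); x_true[:8] = rng.standard_normal(8)
b = A @ x_true + 0.05 * rng.standard_normal(m)

lam, rho, theta = 0.1, 1.0, 0.8      # theta: illustrative relaxation on the multiplier step

x, z, u = np.zeros(d), np.zeros(d), np.zeros(d)
AtA, Atb = A.T @ A, A.T @ b
M = np.linalg.inv(AtA + rho * np.eye(d))        # cached factor for the x-subproblem

for _ in range(200):
    x = M @ (Atb + rho * (z - u))                                      # least-squares block
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)    # l1 block (soft-threshold)
    u = (1 - theta) * u + theta * (u + x - z)                          # damped multiplier update

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```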
14. Convergence of Bregman Peaceman–Rachford Splitting Method for Nonconvex Nonseparable Optimization (cited 1 time)
Authors: Peng-Jie Liu, Jin-Bao Jian, Bo He, Xian-Zhen Jiang. Journal of the Operations Research Society of China (EI, CSCD), 2023, No. 4, pp. 707-733.
Abstract: This work concerns a splitting method for solving a nonconvex nonseparable optimization problem with linear constraints, where the objective function consists of two separable functions and a coupled term. First, based on ideas from the Bregman distance and the Peaceman–Rachford splitting method, a Bregman Peaceman–Rachford splitting method with different relaxation factors for the multiplier is proposed. Second, the global and strong convergence of the proposed algorithm are proved under general conditions, including the admissible region of the two relaxation factors as well as the crucial Kurdyka–Łojasiewicz property. Third, when the associated Kurdyka–Łojasiewicz property function has a special structure, sublinear and linear convergence rates of the proposed algorithm are guaranteed. Furthermore, some preliminary numerical results are shown to indicate the effectiveness of the proposed algorithm.
Keywords: nonconvex nonseparable optimization, Peaceman-Rachford splitting method, Bregman distance, Kurdyka-Łojasiewicz inequality, convergence rate
15. Two-stage robust power cost minimization in a natural gas compressor station
Authors: Yize Meng, Ruoran Chen, Keren Zhang, Tianhu Deng. Petroleum Science (SCIE, CAS, CSCD), 2022, No. 1, pp. 409-428.
Abstract: Optimal operation of a compressor station is important since it accounts for 25% to 50% of a company's total operating budget. In short-term management of a compressor station, handling demand uncertainty is important yet challenging. Previous studies either require precise information about the distribution of uncertain parameters or greatly simplify the compressor model. We build a two-stage robust optimization framework for power cost minimization in a natural gas compressor station with nonidentical compressors. In the first stage, the decision variables are the ON/OFF state of each compressor and the discharge pressure. The worst-case cost of the second stage is incorporated in the first stage. The feasibility of the first-stage decision variables is discussed, and proper feasibility cuts are proposed for the first stage. We employ a piece-wise approximation and propose acceleration methods. Our numerical results highlight two advantages of the robust approach when managing uncertainty in practical settings: (1) the feasibility of the first-stage decision can be increased by up to 45%, and (2) the worst-case cost can be reduced by up to 25% compared with stochastic programming models. Furthermore, our numerical experiments show that the designed acceleration algorithm yields time improvements of 1518.9% on average (3785.9% at maximum).
Keywords: natural gas, single station power minimization, nonconvex robust optimization, C&CG algorithm
16. Nonconvex Sorted l1 Minimization for Sparse Approximation (cited 1 time)
Authors: Xiao-Lin Huang, Lei Shi, Ming Yan. Journal of the Operations Research Society of China (EI, CSCD), 2015, No. 2, pp. 207-229.
Abstract: The l1 norm is the tightest convex relaxation of the l0 norm and has been successfully applied to recovering sparse signals. However, for problems with fewer samples than required for accurate l1 recovery, one needs to apply nonconvex penalties such as the lp norm. As one method for solving lp minimization problems, iteratively reweighted l1 minimization updates the weight for each component based on the value of the same component at the previous iteration. It assigns large weights to components that are small in magnitude and small weights to components that are large in magnitude. The set of weights is not fixed, which makes the analysis of this method difficult. In this paper, we consider a weighted l1 penalty with a fixed set of weights, where the weights are assigned by sorting all components in magnitude: the smallest weight is assigned to the largest component in magnitude. This new penalty is called nonconvex sorted l1. We then propose two methods for solving nonconvex sorted l1 minimization problems, iteratively reweighted l1 minimization and iterative sorted thresholding, and prove that both methods converge to a local minimizer of the nonconvex sorted l1 minimization problem. We also show that the two methods are generalizations of iterative support detection and iterative hard thresholding, respectively. Numerical experiments demonstrate the better performance of assigning weights by sort compared to assigning them by value.
Keywords: iteratively reweighted l1 minimization, iterative sorted thresholding, local minimizer, nonconvex optimization, sparse approximation
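The iterative sorted thresholding idea described above can be sketched as follows: a gradient step on the data term is followed by a thresholding step in which the weights are matched to components by magnitude rank, so the largest components receive the smallest weights. The weight sequence, step size, and problem instance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
m, d, k = 40, 100, 5
A = rng.standard_normal((m, d)) / np.sqrt(m)
x_true = np.zeros(d); x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Fixed weight sequence sorted ascending: the largest component (by magnitude)
# receives the smallest weight, so big coefficients are barely shrunk.
w = np.linspace(0.0, 0.2, d)
step = 0.9 / np.linalg.norm(A, 2) ** 2

def sorted_threshold(z, w):
    order = np.argsort(-np.abs(z))              # rank components by magnitude
    out = np.zeros_like(z)
    out[order] = np.sign(z[order]) * np.maximum(np.abs(z[order]) - w, 0.0)
    return out

x = np.zeros(d)
for _ in range(500):
    z = x - step * A.T @ (A @ x - b)            # gradient step on the data term
    x = sorted_threshold(z, w)                  # threshold by rank, not by value

print("recovery error:", np.linalg.norm(x - x_true))
```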
17. Convergence of Gradient Algorithms for Nonconvex C^(1+α) Cost Functions
Authors: Zixuan WANG, Shanjian TANG. Chinese Annals of Mathematics, Series B (SCIE, CSCD), 2023, No. 3, pp. 445-464.
Abstract: This paper is concerned with the convergence of stochastic gradient algorithms with momentum terms in the nonconvex setting. A class of stochastic momentum methods, including stochastic gradient descent, heavy ball and Nesterov's accelerated gradient, is analyzed in a general framework under mild assumptions. Based on the convergence result for expected gradients, the authors prove almost sure convergence through a detailed discussion of the effects of momentum and the number of upcrossings. It is worth noting that no additional restrictions are imposed on the objective function or stepsize. Another improvement over previous results is that the usual Lipschitz condition on the gradient is relaxed to Hölder continuity. As a byproduct, the authors apply a localization procedure to extend the results to stochastic stepsizes.
Keywords: gradient descent methods, nonconvex optimization, accelerated gradient descent, heavy-ball momentum
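A small sketch of one member of the class analyzed above, stochastic heavy ball, on a smooth nonconvex toy objective with a diminishing stepsize; the noise model, momentum coefficient, and stepsize schedule are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(8)

def grad(x):
    # Smooth nonconvex toy objective f(x) = sum_i (x_i^2 - 1)^2 / 4,
    # queried through a noisy oracle to mimic stochastic gradients.
    return x * (x ** 2 - 1) + 0.1 * rng.standard_normal(x.shape)

x = rng.standard_normal(10)
v = np.zeros_like(x)
beta = 0.9                                   # heavy-ball momentum coefficient

for t in range(1, 5001):
    eta = 0.05 / np.sqrt(t)                  # diminishing stepsize
    v = beta * v + grad(x)                   # momentum buffer accumulates gradients
    x = x - eta * v                          # heavy-ball update

print("final |grad| (noise-free):", np.linalg.norm(x * (x ** 2 - 1)))
```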
18. Generalized Lagrangian Duality in Set-valued Vector Optimization via Abstract Subdifferential
Authors: Yan-fei CHAI, San-yang LIU, Si-qi WANG. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2022, No. 2, pp. 337-351.
Abstract: In this paper, we investigate dual problems for nonconvex set-valued vector optimization via an abstract subdifferential. We first introduce a generalized augmented Lagrangian function induced by a coupling vector-valued function for the set-valued vector optimization problem, and we construct the related set-valued dual map and dual optimization problem on the basis of weak efficiency, using the concepts of supremum and infimum of a set. We then establish weak and strong duality results under this augmented Lagrangian and present sufficient conditions for exact penalization via an abstract subdifferential of the objective map. Finally, we define the sub-optimal path related to the dual problem and show that every cluster point of this sub-optimal path is a primal optimal solution of the original optimization problem. In addition, we consider a generalized vector variational inequality as an application of the abstract subdifferential.
Keywords: nonconvex set-valued vector optimization, abstract subdifferential, generalized augmented Lagrangian duality, exact penalization, sub-optimal path
19. Extrapolated Smoothing Descent Algorithm for Constrained Nonconvex and Nonsmooth Composite Problems
Authors: Yunmei CHEN, Hongcheng LIU, Weina WANG. Chinese Annals of Mathematics, Series B (SCIE, CSCD), 2022, No. 6, pp. 1049-1070.
Abstract: In this paper, the authors propose a novel smoothing descent type algorithm with extrapolation for solving a class of constrained nonsmooth and nonconvex problems, where the nonconvex term is possibly nonsmooth. Their algorithm adopts the proximal gradient algorithm with extrapolation and a safeguarding policy to minimize the smoothed objective function for better practical and theoretical performance. Moreover, the algorithm uses an easily checked rule to update the smoothing parameter, which ensures that any accumulation point of the generated sequence is an (affine-scaled) Clarke stationary point of the original nonsmooth and nonconvex problem. Their experimental results indicate the effectiveness of the proposed algorithm.
Keywords: constrained nonconvex and nonsmooth optimization, smooth approximation, proximal gradient algorithm with extrapolation, gradient descent algorithm, image reconstruction
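A rough sketch of the smoothing-plus-extrapolation idea: a nonsmooth nonconvex log-type penalty is smoothed via sqrt(x^2 + mu^2), the smoothed objective is minimized by a projected gradient step with extrapolation, and mu is periodically decreased. The safeguarding policy and the paper's actual rule for updating the smoothing parameter are not reproduced; the penalty, box constraint, and schedule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
m, d = 60, 120
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)
lam, gamma = 0.5, 1.0
lo, hi = -1.0, 1.0                         # box constraint standing in for the constraint set
LA = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the data-fit gradient

def grad_smoothed(x, mu):
    # Gradient of 0.5*||Ax-b||^2 + lam*sum_i log(1 + sqrt(x_i^2 + mu^2)/gamma);
    # sqrt(x^2 + mu^2) is the smooth surrogate of |x| controlled by the parameter mu.
    s = np.sqrt(x ** 2 + mu ** 2)
    return A.T @ (A @ x - b) + lam * x / (s * (gamma + s))

x, x_prev, mu = np.zeros(d), np.zeros(d), 1.0

for k in range(1, 401):
    step = 1.0 / (LA + lam / (mu * gamma))                  # stepsize shrinks as smoothing tightens
    y = x + (k - 1) / (k + 2) * (x - x_prev)                # extrapolation point
    x_prev = x
    x = np.clip(y - step * grad_smoothed(y, mu), lo, hi)    # gradient step + projection onto the box
    if k % 50 == 0:
        mu = max(0.5 * mu, 1e-4)                            # illustrative smoothing-parameter update

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.log(1 + np.abs(x) / gamma).sum()
print("objective (unsmoothed):", obj)
```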
20. ACCELERATED SYMMETRIC ADMM AND ITS APPLICATIONS IN LARGE-SCALE SIGNAL PROCESSING
Authors: Jianchao Bai, Ke Guo, Junli Liang, Yang Jing, H.C. So. Journal of Computational Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1605-1626.
Abstract: The alternating direction method of multipliers (ADMM) has been extensively investigated in the past decades for solving separable convex optimization problems, and, surprisingly, it also performs efficiently for nonconvex programs. In this paper, we propose a symmetric ADMM based on acceleration techniques for a family of potentially nonsmooth and nonconvex programming problems with equality constraints, where the dual variables are updated twice with different stepsizes. Under proper assumptions, instead of the so-called Kurdyka-Łojasiewicz inequality, the convergence of the proposed algorithm as well as its pointwise iteration complexity are analyzed in terms of the corresponding augmented Lagrangian function and the primal-dual residuals, respectively. The performance of our algorithm is verified by numerical examples corresponding to signal processing applications in sparse nonconvex/convex regularized minimization.
Keywords: nonconvex optimization, symmetric ADMM, acceleration technique, complexity, signal processing
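A sketch of a symmetric ADMM iteration in which the dual variable is updated twice with different stepsizes, applied to a toy sparse regularized least-squares split with a nonconvex l0-type block; the acceleration technique of the paper is not included, and the stepsizes r, s and penalty rho are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(10)
m, d = 60, 150
A = rng.standard_normal((m, d))
x_true = np.zeros(d); x_true[:6] = rng.standard_normal(6)
b = A @ x_true + 0.02 * rng.standard_normal(m)

lam, rho = 0.05, 5.0
r, s = 0.5, 1.0                      # two different dual stepsizes (illustrative values)

x, z, u = np.zeros(d), np.zeros(d), np.zeros(d)
M = np.linalg.inv(A.T @ A + rho * np.eye(d))
Atb = A.T @ b
thr = np.sqrt(2 * lam / rho)         # hard-threshold level for the l0 proximal step

# Problem sketch: min 0.5*||Ax - b||^2 + lam*||z||_0  s.t.  x - z = 0 (scaled dual u).
for _ in range(300):
    x = M @ (Atb + rho * (z - u))                 # smooth block: regularized least squares
    u = u + r * (x - z)                           # first (intermediate) dual update
    v = x + u
    z = np.where(np.abs(v) > thr, v, 0.0)         # nonconvex block: prox of lam*||z||_0 (hard threshold)
    u = u + s * (x - z)                           # second dual update with a different stepsize

print("support size:", np.count_nonzero(z), " residual:", np.linalg.norm(x - z))
```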