Abstract: The Alternating Direction Method of Multipliers (ADMM) is widely used in many fields, and the literature customizes variants of it for different application scenarios [1] [2] [3] [4]. Among them, the linearized alternating direction method of multipliers (LADMM) has received extensive attention because of its effectiveness and ease of implementation. This paper mainly discusses the application of ADMM to dictionary learning, a non-convex problem. Numerical experiments show that ADMM converges slowly when high accuracy is demanded, especially near the optimal solution. We therefore introduce LADMM to accelerate convergence. Specifically, the subproblem is solved by linearizing its quadratic term, and the convergence of the resulting algorithm is proved. The paper ends with a brief summary.
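For orientation, the linearization step that LADMM applies to the quadratic term of the ADMM subproblem can be written in the standard two-block form below; the notation (f, g, A, B, c, penalty rho, step size mu) is generic textbook notation, not taken from the paper. For the problem $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, the ADMM $x$-subproblem with penalty $\rho$ and multiplier $\lambda^k$ is
\[
x^{k+1} = \arg\min_x \; f(x) + \frac{\rho}{2}\left\| Ax + Bz^k - c + \lambda^k/\rho \right\|^2 .
\]
LADMM replaces the quadratic penalty by its linearization at $x^k$ plus a proximal term with step size $\mu \ge \rho\|A\|^2$:
\[
x^{k+1} = \arg\min_x \; f(x) + \rho\left\langle A^{\top}\!\big(Ax^k + Bz^k - c + \lambda^k/\rho\big),\, x - x^k \right\rangle + \frac{\mu}{2}\left\| x - x^k \right\|^2 ,
\]
which reduces to a proximal step on $f$ alone and is therefore often available in closed form.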
Funding: supported by the National Natural Science Foundation of China (Nos. 61303264, 61202482, and 61202488), the Guangxi Cooperative Innovation Center of Cloud Computing and Big Data (No. YD16505), and the Distinguished Young Scientist Promotion of the National University of Defense Technology.
Abstract: We consider a wide range of non-convex regularized minimization problems in which the non-convex regularization term is composed with a linear map, a structure that arises in sparse learning. Recent theoretical investigations have demonstrated the superiority of such regularizers over their convex counterparts. The computational challenge lies in the fact that the proximal mapping associated with the non-convex regularizer is not easily obtained under the imposed linear composition. Fortunately, the problem structure allows one to introduce an auxiliary variable and reformulate the problem as an optimization problem with linear constraints, which can be solved using the Linearized Alternating Direction Method of Multipliers (LADMM). Despite the practical success of LADMM, it has remained unknown whether LADMM converges when solving such non-convex compositely regularized optimization problems. In this research, we first present a detailed convergence analysis of LADMM for a non-convex compositely regularized optimization problem with a large class of non-convex penalties. Furthermore, we propose an Adaptive LADMM (AdaLADMM) algorithm with a line-search criterion. Experimental results on different genres of datasets validate the efficacy of the proposed algorithm.
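As a concrete illustration of this auxiliary-variable reformulation, the sketch below applies LADMM to a least-squares loss with a linearly composed MCP penalty; the loss, the penalty, the step-size rule, and all names are illustrative assumptions, not the authors' implementation.

import numpy as np

def mcp_prox(v, lam, gamma):
    # Closed-form proximal operator of the MCP penalty (requires gamma > 1):
    # 0 on [-lam, lam], a rescaled soft-threshold up to gamma*lam, identity beyond.
    out = np.zeros_like(v)
    mid = (np.abs(v) > lam) & (np.abs(v) <= gamma * lam)
    big = np.abs(v) > gamma * lam
    out[mid] = np.sign(v[mid]) * (np.abs(v[mid]) - lam) / (1.0 - 1.0 / gamma)
    out[big] = v[big]
    return out

def ladmm(X, y, A, lam=0.1, gamma=3.0, rho=1.0, iters=500):
    # Solve  min_w 0.5*||X w - y||^2 + MCP_lam(A w)  via the splitting
    #        min  0.5*||X w - y||^2 + MCP_lam(z)   s.t.   A w = z.
    w = np.zeros(X.shape[1])
    z = np.zeros(A.shape[0])
    u = np.zeros(A.shape[0])  # scaled dual variable
    # Conservative fixed step size; an adaptive variant such as AdaLADMM
    # would instead pick this by line search.
    mu = np.linalg.norm(X, 2) ** 2 + rho * np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        # z-update: exact prox of (1/rho)*MCP, i.e. MCP with rescaled parameters.
        z = mcp_prox(A @ w + u, lam / rho, gamma * rho)
        # w-update: one gradient step on the smooth loss plus the linearized
        # quadratic penalty -- the "linearized" part of LADMM.
        grad = X.T @ (X @ w - y) + rho * A.T @ (A @ w - z + u)
        w -= grad / mu
        # Dual ascent on the constraint residual.
        u += A @ w - z
    return w

The rescaling in the z-update uses the identity $(1/\rho)\,\mathrm{MCP}_{\lambda,\gamma} = \mathrm{MCP}_{\lambda/\rho,\,\gamma\rho}$, so the closed-form prox still applies whenever $\gamma\rho > 1$.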
Abstract: By using the Leray-Schauder nonlinear alternative, the Banach contraction theorem, and the Guo-Krasnosel'skii theorem, we discuss the existence, uniqueness, and positivity of solutions to a third-order multi-point nonhomogeneous boundary value problem (BVP1). The interesting point lies in the fact that the nonlinear term is allowed to depend on the first-order derivative.
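For reference, the Guo-Krasnosel'skii theorem invoked here is the cone expansion/compression fixed-point theorem; its standard statement (in textbook notation, not the paper's) reads: let $E$ be a Banach space, $P \subset E$ a cone, and $\Omega_1, \Omega_2$ bounded open subsets of $E$ with $0 \in \Omega_1$ and $\overline{\Omega}_1 \subset \Omega_2$. If $T : P \cap (\overline{\Omega}_2 \setminus \Omega_1) \to P$ is completely continuous and either
(i) $\|Tu\| \le \|u\|$ for $u \in P \cap \partial\Omega_1$ and $\|Tu\| \ge \|u\|$ for $u \in P \cap \partial\Omega_2$, or
(ii) $\|Tu\| \ge \|u\|$ for $u \in P \cap \partial\Omega_1$ and $\|Tu\| \le \|u\|$ for $u \in P \cap \partial\Omega_2$,
then $T$ has a fixed point in $P \cap (\overline{\Omega}_2 \setminus \Omega_1)$.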