Funding: Supported by the National Natural Science Foundation of China (Nos. 11871006, 11671271).
Abstract: This paper investigates the optimal recovery of the Sobolev space W_1^r[-1,1], r ∈ N, in the space L_1[-1,1]. The authors obtain the exact values of the sampling numbers of W_1^r[-1,1] in L_1[-1,1] and show that the Lagrange interpolation algorithms based on the extreme points of Chebyshev polynomials are optimal algorithms. Moreover, they prove that the extreme points of Chebyshev polynomials are optimal Lagrange interpolation nodes.
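The interpolation scheme the abstract refers to can be sketched in a few lines: sample a function at the extreme points of the Chebyshev polynomial T_n, namely x_k = cos(kπ/n) for k = 0, …, n, and evaluate the Lagrange interpolating polynomial through those samples. This is a minimal illustration, not the paper's analysis; the function names and the choice of test function are illustrative only.

```python
import numpy as np

def chebyshev_extreme_points(n):
    # Extreme points of the Chebyshev polynomial T_n on [-1, 1]:
    # x_k = cos(k*pi/n), k = 0, ..., n  (n+1 nodes in total).
    return np.cos(np.arange(n + 1) * np.pi / n)

def lagrange_interpolate(nodes, values, x):
    # Evaluate the Lagrange interpolating polynomial through
    # (nodes[k], values[k]) at the points x, using the basis-sum form.
    x = np.asarray(x, dtype=float)
    result = np.zeros_like(x)
    for k, (xk, yk) in enumerate(zip(nodes, values)):
        basis = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != k:
                basis *= (x - xj) / (xk - xj)
        result += yk * basis
    return result

# Illustrative recovery of a non-smooth function f in W_1^1[-1, 1]
# from its samples at the Chebyshev extreme points.
f = lambda t: np.abs(t)
nodes = chebyshev_extreme_points(32)
grid = np.linspace(-1.0, 1.0, 201)
approx = lagrange_interpolate(nodes, f(nodes), grid)
```

Unlike equally spaced nodes, the Chebyshev extreme points keep the interpolation process stable (no Runge-type blow-up), which is the practical side of the optimality result stated in the abstract.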
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10831006, 11021101) and by CAS (Grant No. kjcx-yw-s7).
Abstract: The augmented Lagrangian method is a classical method for solving constrained optimization. Recently, the augmented Lagrangian method has attracted much attention due to its applications to sparse optimization in compressive sensing and to low-rank matrix optimization problems. However, most Lagrangian methods use first-order information to update the Lagrange multipliers, which leads to only linear convergence. In this paper, we study an update technique based on second-order information and prove that superlinear convergence can be obtained. Theoretical properties of the update formula are given, and some implementation issues regarding the new update are also discussed.
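The classical first-order multiplier update the abstract contrasts against can be sketched on a toy equality-constrained problem. The problem below (minimize x1² + x2² subject to x1 + x2 = 1, with solution x* = (0.5, 0.5) and multiplier λ* = -1) is a hypothetical example chosen so the inner minimization has a closed form; the paper's second-order update formula is not reproduced here.

```python
import numpy as np

def augmented_lagrangian_first_order(rho=10.0, iters=20):
    # Toy problem: min f(x) = x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 1 = 0.
    # Augmented Lagrangian: L(x, lam) = f(x) + lam*h(x) + (rho/2)*h(x)^2.
    lam = 0.0
    for _ in range(iters):
        # Inner minimization of L in x. For this quadratic problem the
        # minimizer is symmetric, x1 = x2 = t, with optimality condition
        # 2t + lam + rho*(2t - 1) = 0, hence:
        t = (rho - lam) / (2.0 + 2.0 * rho)
        x = np.array([t, t])
        h = x.sum() - 1.0  # constraint violation
        # Classical first-order multiplier update; the error in lam
        # shrinks by a factor 1/(1 + rho) per iteration (linear rate).
        lam = lam + rho * h
    return x, lam

x, lam = augmented_lagrangian_first_order()
```

The linear rate visible here (error contracted by 1/(1 + ρ) each outer iteration) is exactly what the paper's second-order multiplier update, which uses curvature information in place of the fixed step ρ·h(x), improves to superlinear convergence.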