Abstract: We discuss estimates for the rate of convergence of the method of successive subspace corrections in terms of a condition number estimate for the method of parallel subspace corrections. We provide upper bounds and, in a special case, a lower bound for preconditioners defined via the method of successive subspace corrections.
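To make the distinction between the two methods concrete, here is a minimal NumPy sketch (not taken from the paper) of one sweep of successive subspace corrections versus the action of the parallel subspace correction preconditioner for a symmetric positive definite system, using overlapping coordinate blocks as the subspaces. The function names, the toy 1D Laplacian, and the block decomposition are illustrative choices; the printed quantity is the condition number of the additively preconditioned operator, which is what the convergence estimates are phrased in terms of.

```python
import numpy as np

def local_solve(A, r, idx):
    """Solve the restricted residual equation on the subspace spanned by
    the coordinates in idx and prolong the correction back to R^n."""
    e = np.zeros_like(r)
    e[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return e

def successive_correction(A, f, u, subspaces):
    """One sweep of the method of successive subspace corrections
    (multiplicative Schwarz): corrections are applied one after another,
    each using the freshly updated residual."""
    for idx in subspaces:
        u = u + local_solve(A, f - A @ u, idx)
    return u

def parallel_correction(A, r, subspaces):
    """Action of the parallel subspace correction preconditioner
    (additive Schwarz): all corrections use the same residual and are summed."""
    return sum(local_solve(A, r, idx) for idx in subspaces)

# toy SPD system: 1D finite-difference Laplacian
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
subspaces = [np.arange(i, min(i + 8, n)) for i in range(0, n, 4)]  # overlapping blocks

u = np.zeros(n)
for _ in range(50):
    u = successive_correction(A, f, u, subspaces)
print("residual after 50 successive sweeps:", np.linalg.norm(f - A @ u))

# spectrum of the parallel-preconditioned operator B*A, whose condition
# number appears in the estimates discussed above
BA = np.column_stack([parallel_correction(A, A[:, j], subspaces) for j in range(n)])
eigs = np.linalg.eigvals(BA).real
print("kappa(BA) approx:", eigs.max() / eigs.min())
```

In this reading, the bounds referred to above relate the contraction rate of the successive sweep to the condition number kappa(BA) of the parallel preconditioner.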
Funding: supported by NSFC Grant 10601043, NCETXMU, and SRF for ROCS, SEM; by RGC Grant 201508 and HKBU FRGs; and by the Hong Kong Research Grant Council.
Abstract: This paper presents a coordinate gradient descent approach for minimizing the sum of a smooth function and a nonseparable convex function. We find a search direction by solving a subproblem obtained by taking a second-order approximation of the smooth function and adding a separable convex function. Under a local Lipschitzian error bound assumption, we show that the algorithm possesses global and local linear convergence properties. We also give some numerical tests (including image recovery examples) to illustrate the efficiency of the proposed method.
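As a rough illustration of the iteration: the paper targets a nonseparable convex term handled through a separable convex function in the subproblem; to keep the sketch below self-contained and runnable, the convex term is specialized to the separable case lam*||x||_1, the second-order model uses a diagonal Hessian approximation, blocks are chosen cyclically, and a simple Armijo backtracking rule is used. All names and parameter choices are illustrative, not the paper's notation or its algorithm in full.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t*|.|: closed-form solution of the scalar subproblem."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cgd_step(x, grad_f, H_diag, lam, block):
    """One coordinate gradient descent direction on the chosen block:
    minimize g_J^T d + 0.5 d^T H d + lam*||x + d||_1 over d supported on J."""
    d = np.zeros_like(x)
    g = grad_f(x)
    for j in block:
        # scalar subproblem: min_dj  g_j*dj + 0.5*H_jj*dj^2 + lam*|x_j + dj|
        d[j] = soft_threshold(x[j] - g[j] / H_diag[j], lam / H_diag[j]) - x[j]
    return d

def cgd(grad_f, obj, x0, H_diag, lam, blocks, iters=200, beta=0.5, sigma=1e-4):
    """Coordinate gradient descent with an Armijo-type backtracking line search."""
    x = x0.copy()
    for k in range(iters):
        block = blocks[k % len(blocks)]              # cyclic block selection
        d = cgd_step(x, grad_f, H_diag, lam, block)
        delta = grad_f(x) @ d + lam * (np.abs(x + d).sum() - np.abs(x).sum())
        alpha = 1.0
        while alpha > 1e-12 and obj(x + alpha * d) > obj(x) + sigma * alpha * delta:
            alpha *= beta                            # backtrack
        x = x + alpha * d
    return x

# toy LASSO-type problem: min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.5
grad_f = lambda x: A.T @ (A @ x - b)
obj = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
H_diag = np.sum(A * A, axis=0)                       # diagonal of A^T A
blocks = [np.arange(i, i + 5) for i in range(0, 20, 5)]
x = cgd(grad_f, obj, np.zeros(20), H_diag, lam, blocks)
print("final objective:", obj(x))
```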
Abstract: A quasi-Newton method in infinite-dimensional spaces (QNIS) for solving operator equations is presented, and the convergence of the sequence generated by QNIS is proved in the paper. Next, we suggest a finite-dimensional implementation of QNIS and prove that the sequence defined by the finite-dimensional algorithm converges to the root of the original operator equation, provided that the latter exists and that the Fréchet derivative of the governing operator is invertible. Finally, we apply QNIS to an inverse problem for a parabolic differential equation to illustrate the efficiency of the finite-dimensional algorithm.
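Since QNIS itself is formulated on infinite-dimensional spaces, the following NumPy sketch only illustrates the finite-dimensional flavor of the idea: a Broyden-type quasi-Newton iteration for a discretized operator equation F(u) = 0, in which the Jacobian approximation is updated by rank-one corrections rather than recomputed. It is an assumption-laden stand-in (a nonlinear two-point boundary value problem rather than the paper's parabolic inverse problem) and not the specific algorithm analyzed in the paper.

```python
import numpy as np

def broyden_solve(F, x0, B0, tol=1e-10, max_iter=100):
    """Broyden-type quasi-Newton iteration for the discretized operator
    equation F(x) = 0: the Jacobian approximation B is updated by a
    rank-one correction at each step."""
    x, B = x0.astype(float), B0.astype(float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)          # quasi-Newton step: B s = -F(x)
        x_new = x + s
        Fx_new = F(x_new)
        y = Fx_new - Fx
        # Broyden "good" update: B += (y - B s) s^T / (s^T s)
        B += np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, Fx_new
    return x

# toy discretized operator equation (nonlinear two-point boundary value problem):
# F(u)_i = -(u_{i-1} - 2 u_i + u_{i+1})/h^2 + u_i^3 - 1, with u_0 = u_{n+1} = 0
n = 30
h = 1.0 / (n + 1)
def F(u):
    lap = (np.concatenate(([0.0], u[:-1])) - 2 * u + np.concatenate((u[1:], [0.0]))) / h**2
    return -lap + u**3 - 1.0

u0 = np.zeros(n)
# initial Jacobian approximation: the linear part of F (the discrete -Laplacian)
B0 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u = broyden_solve(F, u0, B0)
print("residual norm:", np.linalg.norm(F(u)))
```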