Refined projection methods proposed by the author have received attention internationally. We are concerned with a conventional projection method and its refined counterpart for computing approximations to a simple eigenpair (λ, x) of a large matrix A. Given a subspace ω that contains an approximation to x, these two methods compute approximations (μ, x̃) and (μ, x̂) to (λ, x), respectively. We establish three results. First, the refined eigenvector approximation x̂, or simply the refined Ritz vector, is unique as the deviation of x from ω approaches zero, provided λ is simple. Second, in terms of the residual norm of the refined approximate eigenpair (μ, x̂), we derive lower and upper bounds for the sine of the angle between the Ritz vector x̃ and the refined eigenvector approximation x̂, and we prove that x̃ ≠ x̂ unless x̂ = x. Third, we establish relationships between the residual norm ||Ax̃ − μx̃|| of the conventional method and the residual norm ||Ax̂ − μx̂|| of the refined method, and we show that the latter is always smaller than the former if (μ, x̃) is not an exact eigenpair of A, indicating that the refined projection method is superior to its conventional counterpart.
Funding: Supported by the National Natural Science Foundation of China (Project 10171021).
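To make the refined extraction concrete, the sketch below computes a refined Ritz vector by minimizing the residual norm over unit vectors in the projection subspace, via the SVD of (A − μI) applied to an orthonormal basis of that subspace. The function and variable names (refined_ritz_vector, W, mu) are illustrative, and an orthonormal basis W is assumed to be given.

```python
import numpy as np

def refined_ritz_vector(A, W, mu):
    """Refined eigenvector approximation for the approximate eigenvalue mu.

    W holds an orthonormal basis of the projection subspace as columns; the
    refined Ritz vector minimizes ||(A - mu*I) u|| over unit u in span(W).
    """
    # The minimizer is W times the right singular vector of (A - mu*I) W
    # associated with the smallest singular value.
    B = A @ W - mu * W
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    z = Vt[-1]                  # right singular vector for the smallest sigma
    x_refined = W @ z           # unit vector, since W has orthonormal columns
    residual_norm = s[-1]       # = ||(A - mu*I) x_refined||
    return x_refined, residual_norm
```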
The two-sided Lanczos method is popular for computing a few selected eigentriplets of large non-Hermitian matrices. However, it has been revealed that the Ritz vectors obtained by this method may not converge even if the subspaces are good enough and the associated eigenvalues converge. In order to remedy this drawback, a novel method is proposed, based on the refined strategy, the quasi-refined idea, and the Lanczos biorthogonalization procedure, and the resulting algorithm is presented. The relationship between the new method and the classical oblique projection technique is also established. We report numerical experiments comparing the new algorithm with the conventional one; the results show that the former is often more powerful than the latter.
Funding: the China State Key Project for Basic Researches, the National Natural Science Foundation of China, The Research Fund for th
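For context, the following is a minimal sketch of the standard Lanczos biorthogonalization procedure named above as a building block; it does not reproduce the paper's refined or quasi-refined modifications, and the function name and breakdown tolerance are illustrative choices.

```python
import numpy as np

def lanczos_biorth(A, v1, w1, m):
    """Standard Lanczos biorthogonalization (two-sided Lanczos) sketch.

    In exact arithmetic it builds biorthogonal bases V, W (W.T @ V = I) and a
    tridiagonal matrix T such that A V_m = V_m T_m + delta_{m+1} v_{m+1} e_m^T.
    """
    n = A.shape[0]
    V, W = np.zeros((n, m + 1)), np.zeros((n, m + 1))
    T = np.zeros((m, m))
    v = v1 / np.linalg.norm(v1)
    w = w1 / (w1 @ v)                  # normalize so that w^T v = 1
    V[:, 0], W[:, 0] = v, w
    beta = delta = 0.0
    v_prev = w_prev = np.zeros(n)
    for j in range(m):
        alpha = W[:, j] @ (A @ V[:, j])
        T[j, j] = alpha
        r = A @ V[:, j] - alpha * V[:, j] - beta * v_prev       # next v direction
        s = A.T @ W[:, j] - alpha * W[:, j] - delta * w_prev    # next w direction
        prod = r @ s
        if abs(prod) < 1e-14:          # serious breakdown: stop (look-ahead not shown)
            return V[:, :j + 1], W[:, :j + 1], T[:j + 1, :j + 1]
        delta = np.sqrt(abs(prod))     # subdiagonal entry of T
        beta = prod / delta            # superdiagonal entry of T
        v_prev, w_prev = V[:, j], W[:, j]
        V[:, j + 1] = r / delta
        W[:, j + 1] = s / beta
        if j + 1 < m:
            T[j + 1, j] = delta
            T[j, j + 1] = beta
    return V, W, T
```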
§1. Introduction
Let A ∈ R^(M×N) and define the augmented matrix

Ã = ( O    A
      A^T  O ),        (1)

where the superscript T denotes transposition. Without loss of generality, assume M ≥ N. Let σ_i, i = 1, 2, …, N, be the singular values of A, and let u_i and v_i be the corresponding left and right singular vectors, with the singular values arranged in either increasing or decreasing order. Then the eigenvalues of Ã are exactly ±σ_i, i = 1, 2, …, N, together with M − N zeros, and the eigenvectors corresponding to ±σ_i are 1/√2 (u_i^T, v_i^T)^T and 1/√2 (u_i^T, −v_i^T)^T, respectively.
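A quick numerical check of the eigenvalue relation stated above; the sizes and the random test matrix are arbitrary choices for illustration.

```python
import numpy as np

# Verify that the eigenvalues of the augmented matrix A_tilde = [[O, A], [A^T, O]]
# are exactly +/- the singular values of A plus M - N zeros.
rng = np.random.default_rng(0)
M, N = 6, 4                                   # example sizes with M >= N
A = rng.standard_normal((M, N))

A_tilde = np.block([[np.zeros((M, M)), A],
                    [A.T, np.zeros((N, N))]])

sigma = np.linalg.svd(A, compute_uv=False)    # singular values of A
eig = np.sort(np.linalg.eigvalsh(A_tilde))    # eigenvalues of the symmetric A_tilde

expected = np.sort(np.concatenate([sigma, -sigma, np.zeros(M - N)]))
print(np.allclose(eig, expected))             # True
```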
The Ritz vectors obtained by Arnoldi's method may not be good approximations and may not converge even if the corresponding Ritz values do. In order to improve the quality of the Ritz vectors and enhance the efficiency of Arnoldi-type algorithms, we propose a strategy that uses Ritz values obtained from an m-dimensional Krylov subspace but chooses modified approximate eigenvectors in an (m + 1)-dimensional Krylov subspace. The residual norm of each new approximate eigenpair is minimal over the span of the Ritz vector and the (m + 1)th basis vector, which is available when the m-step Arnoldi process is run. The resulting modified m-step Arnoldi method is better than the standard m-step one in theory and cheaper than the standard (m + 1)-step one. Based on this strategy, we present a modified m-step restarted Arnoldi algorithm. Numerical examples show that the modified m-step restarted algorithm and its version with Chebyshev acceleration are often considerably more efficient than the standard (m + 1)-step restarted ones.
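The core of this strategy, minimizing the residual over the span of a Ritz vector and the (m+1)th Arnoldi basis vector, reduces to a small two-column SVD problem. The sketch below is illustrative only: it forms A·v_{m+1} explicitly (one extra matrix-vector product), and the names modified_eigvec, x_ritz, v_next are assumptions.

```python
import numpy as np

def modified_eigvec(A, x_ritz, v_next, theta):
    """Minimize ||(A - theta*I) u|| over unit u in span{x_ritz, v_next}.

    x_ritz is a unit-norm Ritz vector from the m-dimensional Krylov subspace and
    v_next is the (m+1)th orthonormal Arnoldi basis vector; since the two are
    orthonormal, a unit coefficient vector (a, b) yields a unit vector u.
    """
    B = np.column_stack([A @ x_ritz - theta * x_ritz,
                         A @ v_next - theta * v_next])
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    a, b = Vt[-1]                 # coefficients minimizing the residual norm
    u = a * x_ritz + b * v_next
    return u, s[-1]               # s[-1] = ||(A - theta*I) u||
```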
A large unsymmetric linear system problem is transformed into the problem of computing the eigenvector of a large symmetric nonnegative definite matrix associated with the eigenvalue zero, i.e., the computation of the eigenvector of the cross-product matrix of an augmented matrix associated with the eigenvalue zero. The standard Lanczos method and an improved refined Lanczos method are proposed that compute approximate eigenvectors and return approximate solutions of the linear system. An implicitly restarted Lanczos algorithm and its refined version are developed. Theoretical analysis and numerical experiments show that the refined method is better than the standard one. If the large matrix has small eigenvalues, the two new algorithms are much faster than unpreconditioned restarted GMRES.
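To make the transformation concrete, here is a small dense illustration of one way to set it up. The ordering and sign convention of the augmented matrix are assumptions, and a dense symmetric eigensolver stands in for the Lanczos-based algorithms described above.

```python
import numpy as np

# Solve A x = b by finding the null eigenvector of the cross-product matrix
# C = B^T B of the augmented matrix B = [A, -b]: B @ [x; 1] = A x - b = 0,
# so the eigenvector of C for the eigenvalue zero is proportional to [x; 1].
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

B = np.column_stack([A, -b])      # n x (n+1) augmented matrix
C = B.T @ B                       # (n+1) x (n+1), symmetric nonnegative definite

w, U = np.linalg.eigh(C)          # eigenvalues ascending; w[0] is ~ 0
z = U[:, 0]                       # eigenvector for the (near-)zero eigenvalue
x = z[:n] / z[n]                  # rescale so the last component equals 1
print(np.linalg.norm(x - x_true)) # small, at the level of roundoff
```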
The singular value decomposition problem is mathematically equivalent to the eigenproblem of an augmented matrix. Golub et al. give a bidiagonalization Lanczos method for computing a number of largest or smallest singular values and the corresponding singular vectors, but the method may encounter convergence problems. In this paper we analyse the convergence of the method and show why it may fail to converge. To correct this possible nonconvergence, we propose a refined bidiagonalization Lanczos method and apply the implicit restarting technique to it; we then present an implicitly restarted bidiagonalization Lanczos algorithm (IRBL) and an implicitly restarted refined bidiagonalization Lanczos algorithm (IRRBL). A new implicit restarting scheme and a reliable and efficient algorithm for computing refined shifts are developed for this eigenproblem of special structure. Theoretical analysis and numerical experiments show that IRRBL performs much better than IRBL.
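For reference, a minimal sketch of the Golub-Kahan (bidiagonalization Lanczos) process that these algorithms build on, together with a note on how approximate singular triplets are extracted; reorthogonalization, the refined extraction, and implicit restarting are deliberately omitted, and the function name is illustrative.

```python
import numpy as np

def golub_kahan_bidiag(A, v1, k):
    """k steps of Golub-Kahan bidiagonalization: A @ V = U @ B with B upper
    bidiagonal (k x k). Reorthogonalization and breakdown (a zero alpha or
    beta, signalling an invariant subspace) are not handled in this sketch."""
    M, N = A.shape
    V, U = np.zeros((N, k)), np.zeros((M, k))
    B = np.zeros((k, k))
    V[:, 0] = v1 / np.linalg.norm(v1)
    p = A @ V[:, 0]
    alpha = np.linalg.norm(p)
    U[:, 0] = p / alpha
    B[0, 0] = alpha
    for j in range(k - 1):
        r = A.T @ U[:, j] - alpha * V[:, j]
        beta = np.linalg.norm(r)
        V[:, j + 1] = r / beta
        p = A @ V[:, j + 1] - beta * U[:, j]
        alpha = np.linalg.norm(p)
        U[:, j + 1] = p / alpha
        B[j, j + 1] = beta            # superdiagonal of B
        B[j + 1, j + 1] = alpha       # diagonal of B
    return U, V, B

# Approximate singular triplets of A: the singular values of B approximate
# singular values of A; for a singular triplet (s, y, z) of B, the vectors
# U @ y and V @ z approximate the corresponding left and right singular vectors.
```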