Abstract
This note explores the relation between two different methods. The first is the Alternating Least Squares (ALS) method for calculating a rank-k approximation of a real m×n matrix A. This method has important applications in nonnegative matrix factorizations, in matrix completion problems, and in tensor approximations. The second method is called Orthogonal Iterations; it is also known as Subspace Iterations, Simultaneous Iterations, and the block Power method. Given a real symmetric matrix G, this method computes k dominant eigenvectors of G. To see the relation between the two methods we assume that G = AᵀA. It is shown that in this case the two methods generate the same sequence of subspaces and the same sequence of low-rank approximations. This equivalence provides new insight into the convergence properties of both methods.
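The following is a minimal numerical sketch (Python/NumPy, not taken from the note itself) of the stated equivalence: with a common starting block, one unconstrained ALS sweep on A and one orthogonal-iteration step on G = AᵀA span the same k-dimensional subspace, so the largest principal angle between the two iterates stays at roundoff level. The matrix dimensions, the random test matrix, and the least-squares form of the ALS updates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 40, 4
A = rng.standard_normal((m, n))   # illustrative random test matrix
G = A.T @ A                       # the symmetric matrix used by orthogonal iterations

def max_principal_angle(X, Y):
    """Largest principal angle (radians) between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    cosines = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.arccos(np.clip(cosines.min(), -1.0, 1.0))

V = rng.standard_normal((n, k))   # common starting block for both methods
Q, _ = np.linalg.qr(V)            # orthonormal start for orthogonal iterations

for i in range(10):
    # One orthogonal (subspace) iteration step on G.
    Q, _ = np.linalg.qr(G @ Q)
    # One ALS sweep on A: solve for U with V fixed, then for V with U fixed.
    U = np.linalg.lstsq(V, A.T, rcond=None)[0].T   # argmin_U ||A - U V^T||_F
    V = np.linalg.lstsq(U, A, rcond=None)[0].T     # argmin_V ||A - U V^T||_F
    # The two k-dimensional subspaces should coincide up to roundoff.
    print(f"iter {i}: max principal angle = {max_principal_angle(Q, V):.2e}")
```

Since each ALS sweep replaces V by a matrix whose column space equals that of GV (the k×k least-squares factors only change the basis), the printed angles remain near machine precision, which is the subspace-level equivalence described in the abstract.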
Author
Achiya Dax (Hydrological Service, Jerusalem, Israel)