Funding: This work was supported by the National Basic Research 973 Program of China under Grant No. 2011CB302202, the National Natural Science Foundation of China under Grant Nos. 61375051 and 61075119, and the Seeding Grant for Medicine and Information Sciences of Peking University of China under Grant No. 2014-MI-21.
Abstract: We present a new manifold learning algorithm called Local Orthogonality Preserving Alignment (LOPA). Our algorithm is inspired by the Local Tangent Space Alignment (LTSA) method, which aims to align multiple local neighborhoods into a global coordinate system using affine transformations. However, LTSA often fails to preserve original geometric quantities such as distances and angles. Although an iterative alignment procedure for preserving orthogonality was suggested by the authors of LTSA, neither the corresponding initialization nor experiments were given. Procrustes Subspaces Alignment (PSA) implements the orthogonality-preserving idea by estimating each rotation transformation separately with simulated annealing. However, the optimization in PSA is complicated, and multiple separately estimated local rotations may produce globally contradictory results. To address these difficulties, we first use the pseudo-inverse trick of LTSA to represent each local orthogonal transformation in terms of the unified global coordinates. Second, the orthogonality constraints are relaxed into an instance of semi-definite programming (SDP). Finally, a two-step iterative procedure is employed to further reduce the errors in the orthogonality constraints. Extensive experiments show that LOPA can faithfully preserve distances, angles, inner products, and neighborhoods of the original datasets. In comparison, the embedding performance of LOPA is better than that of PSA and comparable to that of state-of-the-art algorithms like MVU and MVE, while the runtime of LOPA is significantly faster than that of PSA, MVU, and MVE.
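For context on the alignment step that LOPA builds on, the following is a minimal sketch of the standard LTSA procedure referenced in the abstract, not of the LOPA algorithm itself: each neighborhood's local tangent coordinates are obtained by SVD, and the global coordinates come from the bottom eigenvectors of the accumulated alignment matrix. The function name ltsa_embedding, the neighborhood size k, and the use of scipy's cKDTree are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.spatial import cKDTree

def ltsa_embedding(X, d, k):
    """Minimal LTSA sketch (baseline, not LOPA): align local tangent
    coordinates into a global embedding.

    X : (n, D) data matrix, d : target dimension, k : neighborhood size (k > d).
    """
    n = X.shape[0]
    _, idx = cKDTree(X).query(X, k=k)        # k nearest neighbors (each point includes itself)
    B = np.zeros((n, n))                     # global alignment matrix
    for i in range(n):
        Ni = idx[i]
        Xi = X[Ni] - X[Ni].mean(axis=0)      # center the neighborhood
        U, _, _ = np.linalg.svd(Xi, full_matrices=False)
        # G spans the constant vector plus the top-d left singular vectors
        G = np.hstack([np.ones((k, 1)) / np.sqrt(k), U[:, :d]])
        B[np.ix_(Ni, Ni)] += np.eye(k) - G @ G.T
    # global coordinates: eigenvectors 2..d+1 of B (skip the constant eigenvector)
    _, vecs = np.linalg.eigh(B)
    return vecs[:, 1:d + 1]

# Illustrative usage: unroll a Swiss-roll-like 2-D manifold embedded in 3-D
t = np.random.default_rng(0).uniform(1.5 * np.pi, 4.5 * np.pi, 800)
h = np.random.default_rng(1).uniform(0, 10, 800)
X = np.column_stack([t * np.cos(t), h, t * np.sin(t)])
Y = ltsa_embedding(X, d=2, k=12)

Because this alignment is affine rather than orthogonal, the recovered coordinates Y can be arbitrarily stretched or sheared relative to the true manifold parametrization, which is precisely the distance- and angle-distortion issue that LOPA's orthogonality constraints are designed to address.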