Abstract: In this paper, we discuss the characteristic matrices and similarity of time-varying linear differential systems, with emphasis on the relationship between similarity and the characteristic matrices of the systems. We obtain a necessary and sufficient condition for a matrix to be a characteristic matrix, a necessary and sufficient condition for two time-varying linear systems to be similar, and a necessary and sufficient condition for a time-varying linear system to be similar to a block-diagonal system. We improve one of the main results of [4].
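The abstract does not spell out the notion of similarity it uses; a minimal sketch, assuming the standard definition via a change of variables (the symbols $A$, $B$, $P$ here are our own, not taken from the paper): two systems

```latex
\dot{x} = A(t)\,x, \qquad \dot{z} = B(t)\,z
```

are called similar if there is a nonsingular, continuously differentiable matrix $P(t)$ such that $z = P(t)\,x$ maps solutions of the first system onto solutions of the second. Differentiating $z = P x$ gives $\dot{z} = \dot{P}x + P\dot{x} = (\dot{P} + PA)x$, so the transformed system matrix is

```latex
B(t) = \bigl(\dot{P}(t) + P(t)A(t)\bigr)\,P(t)^{-1}.
```

Under this definition, being similar to a block-diagonal system means such a $P(t)$ exists for which $B(t)$ is block-diagonal.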
Funding: the National Natural Science Foundation of China (Grant Nos. 61433014 and 71601029).
Abstract: In high-dimensional data, many dimensions are irrelevant to each other, and clusters are usually hidden under noise. As an important extension of traditional clustering, subspace clustering can be used to simultaneously cluster high-dimensional data into several subspaces and associate the low-dimensional subspaces with the corresponding points. A crucial step in subspace clustering is to construct an affinity matrix in block-diagonal form, in which the blocks correspond to different clusters. Distance-based methods and representation-based methods are the two major types of approaches for building an informative affinity matrix. In general, it is the difference between the density inside and outside the blocks that determines the efficiency and accuracy of the clustering. In this work, we introduce a well-known approach from statistical physics, namely link prediction, to enhance subspace clustering by reinforcing the affinity matrix; more broadly, we introduce the idea of combining complex network theory with machine learning. By revealing the hidden links inside each block, we maximize the density of the blocks along the diagonal while keeping the remaining off-block entries of the affinity matrix as sparse as possible. Our method shows remarkably improved clustering accuracy compared with existing methods on well-known datasets.
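To illustrate the reinforcement idea, here is a minimal sketch, not the paper's actual algorithm: it applies one of the simplest link-prediction indices, the common-neighbours score $S = WW$, to an affinity matrix $W$ and mixes the result back in, so that a missing link between two points sharing many neighbours inside a block gains weight. The function name `reinforce_affinity` and the mixing weight `alpha` are our own assumptions.

```python
import numpy as np

def reinforce_affinity(W, alpha=0.5):
    """Reinforce an affinity matrix with a common-neighbours
    link-prediction score (illustrative sketch; `alpha` is a
    hypothetical mixing weight, not from the paper)."""
    W = np.asarray(W, dtype=float)
    W = (W + W.T) / 2.0            # symmetrise the affinity graph
    np.fill_diagonal(W, 0.0)       # ignore self-affinities
    S = W @ W                      # S[i, j]: weighted count of common neighbours
    m = S.max()
    if m > 0:
        S = S / m                  # normalise scores to [0, 1]
    np.fill_diagonal(S, 0.0)
    # Convex mix: keep the original affinities, add predicted links.
    return (1.0 - alpha) * W + alpha * S
```

On a toy graph with edges 0-1 and 1-2 (and an isolated node 3), the pair (0, 2) has a common neighbour, so the reinforced matrix assigns it a positive affinity even though the original entry is zero, while affinities to the off-block node 3 remain zero; this is the "revealing hidden links inside a block" effect the abstract describes.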