A number of previous papers have studied the problem of recovering low-rank matrices with noise, further combining the noisy and perturbed cases, we propose a nonconvex Schatten p-norm minimization method to deal with...A number of previous papers have studied the problem of recovering low-rank matrices with noise, further combining the noisy and perturbed cases, we propose a nonconvex Schatten p-norm minimization method to deal with the recovery of fully perturbed low-rank matrices. By utilizing the p-null space property (p-NSP) and the p-restricted isometry property (p-RIP) of the matrix, sufficient conditions to ensure that the stable and accurate reconstruction for low-rank matrix in the case of full perturbation are derived, and two upper bound recovery error estimation ns are given. These estimations are characterized by two vital aspects, one involving the best r-approximation error and the other concerning the overall noise. Specifically, this paper obtains two new error upper bounds based on the fact that p-RIP and p-NSP are able to recover accurately and stably low-rank matrix, and to some extent improve the conditions corresponding to RIP.展开更多
Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Anal...Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data’s underlying structure and their combination allows for a more nuanced and accurate separation of the main data components from outliers and noise. Then RPCA-SL is solved by employing a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulation and real data demonstrate significant advancements.展开更多
The method of recovering a low-rank matrix with an unknown fraction whose entries are arbitrarily corrupted is known as the robust principal component analysis (RPCA). This RPCA problem, under some conditions, can b...The method of recovering a low-rank matrix with an unknown fraction whose entries are arbitrarily corrupted is known as the robust principal component analysis (RPCA). This RPCA problem, under some conditions, can be exactly solved via convex optimization by minimizing a combination of the nuclear norm and the 11 norm. In this paper, an algorithm based on the Douglas-Rachford splitting method is proposed for solving the RPCA problem. First, the convex optimization problem is solved by canceling the constraint of the variables, and ~hen the proximity operators of the objective function are computed alternately. The new algorithm can exactly recover the low-rank and sparse components simultaneously, and it is proved to be convergent. Numerical simulations demonstrate the practical utility of the proposed algorithm.展开更多
Due to the fine-grained communication scenarios characterization and stability,Wi-Fi channel state information(CSI)has been increasingly applied to indoor sensing tasks recently.Although spatial variations are explici...Due to the fine-grained communication scenarios characterization and stability,Wi-Fi channel state information(CSI)has been increasingly applied to indoor sensing tasks recently.Although spatial variations are explicitlyreflected in CSI measurements,the representation differences caused by small contextual changes are easilysubmerged in the fluctuations of multipath effects,especially in device-free Wi-Fi sensing.Most existing datasolutions cannot fully exploit the temporal,spatial,and frequency information carried by CSI,which results ininsufficient sensing resolution for indoor scenario changes.As a result,the well-liked machine learning(ML)-based CSI sensing models still struggling with stable performance.This paper formulates a time-frequency matrixon the premise of demonstrating that the CSI has low-rank potential and then proposes a distributed factorizationalgorithm to effectively separate the stable structured information and context fluctuations in the CSI matrix.Finally,a multidimensional tensor is generated by combining the time-frequency gradients of CSI,which containsrich and fine-grained real-time contextual information.Extensive evaluations and case studies highlight thesuperiority of the proposal.展开更多
In this paper,we study the low-rank matrix completion problem with Poisson observations,where only partial entries are available and the observations are in the presence of Poisson noise.We propose a novel model compo...In this paper,we study the low-rank matrix completion problem with Poisson observations,where only partial entries are available and the observations are in the presence of Poisson noise.We propose a novel model composed of the Kullback-Leibler(KL)divergence by using the maximum likelihood estimation of Poisson noise,and total variation(TV)and nuclear norm constraints.Here the nuclear norm and TV constraints are utilized to explore the approximate low-rankness and piecewise smoothness of the underlying matrix,respectively.The advantage of these two constraints in the proposed model is that the low-rankness and piecewise smoothness of the underlying matrix can be exploited simultaneously,and they can be regularized for many real-world image data.An upper error bound of the estimator of the proposed model is established with high probability,which is not larger than that of only TV or nuclear norm constraint.To the best of our knowledge,this is the first work to utilize both low-rank and TV constraints with theoretical error bounds for matrix completion under Poisson observations.Extensive numerical examples on both synthetic data and real-world images are reported to corroborate the superiority of the proposed approach.展开更多
Low-rank matrix decomposition with first-order total variation(TV)regularization exhibits excellent performance in exploration of image structure.Taking advantage of its excellent performance in image denoising,we app...Low-rank matrix decomposition with first-order total variation(TV)regularization exhibits excellent performance in exploration of image structure.Taking advantage of its excellent performance in image denoising,we apply it to improve the robustness of deep neural networks.However,although TV regularization can improve the robustness of the model,it reduces the accuracy of normal samples due to its over-smoothing.In our work,we develop a new low-rank matrix recovery model,called LRTGV,which incorporates total generalized variation(TGV)regularization into the reweighted low-rank matrix recovery model.In the proposed model,TGV is used to better reconstruct texture information without over-smoothing.The reweighted nuclear norm and Li-norm can enhance the global structure information.Thus,the proposed LRTGV can destroy the structure of adversarial noise while re-enhancing the global structure and local texture of the image.To solve the challenging optimal model issue,we propose an algorithm based on the alternating direction method of multipliers.Experimental results show that the proposed algorithm has a certain defense capability against black-box attacks,and outperforms state-of-the-art low-rank matrix recovery methods in image restoration.展开更多
As the development of smart grid and energy internet, this leads to a significantincrease in the amount of data transmitted in real time. Due to the mismatch withcommunication networks that were not designed to carry ...As the development of smart grid and energy internet, this leads to a significantincrease in the amount of data transmitted in real time. Due to the mismatch withcommunication networks that were not designed to carry high-speed and real time data,data losses and data quality degradation may happen constantly. For this problem,according to the strong spatial and temporal correlation of electricity data which isgenerated by human’s actions and feelings, we build a low-rank electricity data matrixwhere the row is time and the column is user. Inspired by matrix decomposition, we dividethe low-rank electricity data matrix into the multiply of two small matrices and use theknown data to approximate the low-rank electricity data matrix and recover the missedelectrical data. Based on the real electricity data, we analyze the low-rankness of theelectricity data matrix and perform the Matrix Decomposition-based method on the realdata. The experimental results verify the efficiency and efficiency of the proposed scheme.展开更多
Given a symmetric matrix X, we consider the problem of finding a low-rank positive approximant of X. That is, a symmetric positive semidefinite matrix, S, whose rank is smaller than a given positive integer, , which i...Given a symmetric matrix X, we consider the problem of finding a low-rank positive approximant of X. That is, a symmetric positive semidefinite matrix, S, whose rank is smaller than a given positive integer, , which is nearest to X in a certain matrix norm. The problem is first solved with regard to four common norms: The Frobenius norm, the Schatten p-norm, the trace norm, and the spectral norm. Then the solution is extended to any unitarily invariant matrix norm. The proof is based on a subtle combination of Ky Fan dominance theorem, a modified pinching principle, and Mirsky minimum-norm theorem.展开更多
In this paper, we investigate the recovery of an undamped spectrally sparse signal and its spectral components from a set of regularly spaced samples within the framework of spectral compressed sensing and super-resol...In this paper, we investigate the recovery of an undamped spectrally sparse signal and its spectral components from a set of regularly spaced samples within the framework of spectral compressed sensing and super-resolution. We show that the existing Hankel-based optimization methods suffer from the fundamental limitation that the prior knowledge of undampedness cannot be exploited. We propose a new low-rank optimization model partially inspired by forward-backward processing for line spectral estimation and show its capability to restrict the spectral poles to the unit circle. We present convex relaxation approaches with the model and show their provable accuracy and robustness to bounded and sparse noise. All our results are generalized from one-dimensional to arbitrary-dimensional spectral compressed sensing. Numerical simulations are provided to corroborate our analysis and show the efficiency of our model and the advantageous performance of our approach in terms of accuracy and resolution compared with the state-of-the-art Hankel and atomic norm methods.展开更多
In this paper, a sufficient condition is obtained to ensure the stable recovery(ε≠ 0) or exact recovery(ε = 0) of all r-rank matrices X ∈ Rm×nfrom b = A(X) + z via nonconvex Schatten p-minimization for anyδ4...In this paper, a sufficient condition is obtained to ensure the stable recovery(ε≠ 0) or exact recovery(ε = 0) of all r-rank matrices X ∈ Rm×nfrom b = A(X) + z via nonconvex Schatten p-minimization for anyδ4r∈ [3~(1/2))2, 1). Moreover, we determine the range of parameter p with any given δ4r∈ [(3~(1/2))/22, 1). In fact, for any given δ4r∈ [3~(1/2))2, 1), p ∈(0, 2(1- δ4r)] suffices for the stable recovery or exact recovery of all r-rank matrices.展开更多
This paper aims at achieving a simultaneously sparse and low-rank estimator from the semidefinite population covariance matrices.We first benefit from a convex optimization which develops l1-norm penalty to encourage ...This paper aims at achieving a simultaneously sparse and low-rank estimator from the semidefinite population covariance matrices.We first benefit from a convex optimization which develops l1-norm penalty to encourage the sparsity and nuclear norm to favor the low-rank property.For the proposed estimator,we then prove that with high probability,the Frobenius norm of the estimation rate can be of order O(√((slgg p)/n))under a mild case,where s and p denote the number of nonzero entries and the dimension of the population covariance,respectively and n notes the sample capacity.Finally,an efficient alternating direction method of multipliers with global convergence is proposed to tackle this problem,and merits of the approach are also illustrated by practicing numerical simulations.展开更多
This paper considers approximately sparse signal and low-rank matrix’s recovery via truncated norm minimization minx∥xT∥q and minX∥XT∥Sq from noisy measurements.We first introduce truncated sparse approximation p...This paper considers approximately sparse signal and low-rank matrix’s recovery via truncated norm minimization minx∥xT∥q and minX∥XT∥Sq from noisy measurements.We first introduce truncated sparse approximation property,a more general robust null space property,and establish the stable recovery of signals and matrices under the truncated sparse approximation property.We also explore the relationship between the restricted isometry property and truncated sparse approximation property.And we also prove that if a measurement matrix A or linear map A satisfies truncated sparse approximation property of order k,then the first inequality in restricted isometry property of order k and of order 2k can hold for certain different constantsδk andδ2k,respectively.Last,we show that ifδs(k+|T^c|)<√(s-1)/s for some s≥4/3,then measurement matrix A and linear map A satisfy truncated sparse approximation property of order k.It should be pointed out that when Tc=Ф,our conclusion implies that sparse approximation property of order k is weaker than restricted isometry property of order sk.展开更多
As a kind of weaker supervisory information, pairwise constraints can be exploited to guide the data analysis process, such as data clustering. This paper formulates pairwise constraint propagation, which aims to pred...As a kind of weaker supervisory information, pairwise constraints can be exploited to guide the data analysis process, such as data clustering. This paper formulates pairwise constraint propagation, which aims to predict the large quantity of unknown constraints from scarce known constraints, as a low-rank matrix recovery(LMR) problem. Although recent advances in transductive learning based on matrix completion can be directly adopted to solve this problem, our work intends to develop a more general low-rank matrix recovery solution for pairwise constraint propagation, which not only completes the unknown entries in the constraint matrix but also removes the noise from the data matrix. The problem can be effectively solved using an augmented Lagrange multiplier method. Experimental results on constrained clustering tasks based on the propagated pairwise constraints have shown that our method can obtain more stable results than state-of-the-art algorithms,and outperform them.展开更多
Tensor robust principal component analysis has received a substantial amount of attention in various fields.Most existing methods,normally relying on tensor nuclear norm minimization,need to pay an expensive computati...Tensor robust principal component analysis has received a substantial amount of attention in various fields.Most existing methods,normally relying on tensor nuclear norm minimization,need to pay an expensive computational cost due to multiple singular value decompositions at each iteration.To overcome the drawback,we propose a scalable and efficient method,named parallel active subspace decomposition,which divides the unfolding along each mode of the tensor into a columnwise orthonormal matrix(active subspace)and another small-size matrix in parallel.Such a transformation leads to a nonconvex optimization problem in which the scale of nuclear norm minimization is generally much smaller than that in the original problem.We solve the optimization problem by an alternating direction method of multipliers and show that the iterates can be convergent within the given stopping criterion and the convergent solution is close to the global optimum solution within the prescribed bound.Experimental results are given to demonstrate that the performance of the proposed model is better than the state-of-the-art methods.展开更多
We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on structure analysis of SVD of columnpartitioned matrices and sparse low-rank appr...We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on structure analysis of SVD of columnpartitioned matrices and sparse low-rank approximation; for the nonlinear case we investigate methods for nonlinear dimensionality reduction and manifold learning. The problems we address have attracted great deal of interest in data mining and machine learning.展开更多
In applications involving,e.g.,panel data,images,genomics microarrays,etc.,trace regression models are useful tools.To address the high-dimensional issue of these applications,it is common to assume some sparsity prop...In applications involving,e.g.,panel data,images,genomics microarrays,etc.,trace regression models are useful tools.To address the high-dimensional issue of these applications,it is common to assume some sparsity property.For the case of the parameter matrix being simultaneously low rank and elements-wise sparse,we estimate the parameter matrix through the least-squares approach with the composite penalty combining the nuclear norm and the l1norm.We extend the existing analysis of the low-rank trace regression with i.i.d.errors to exponentialβ-mixing errors.The explicit convergence rate and the asymptotic properties of the proposed estimator are established.Simulations,as well as a real data application,are also carried out for illustration.展开更多
Knowledge graph embedding, which maps the entities and relations into low-dimensional vector spaces, has demonstrated its effectiveness in many tasks such as link prediction and relation extraction. Typical methods in...Knowledge graph embedding, which maps the entities and relations into low-dimensional vector spaces, has demonstrated its effectiveness in many tasks such as link prediction and relation extraction. Typical methods include TransE, TransH, and TransR. All these methods map different relations into the vector space separately and the intrinsic correlations of these relations are ignored. It is obvious that there exist some correlations among relations because different relations may connect to a common entity. For example, the triples (Steve Jobs, PlaceOfBrith, California) and (Apple Inc., Location, California) share the same entity California as their tail entity. We analyze the embedded relation matrices learned by TransE/TransH/TransR, and find that the correlations of relations do exist and they are showed as low-rank structure over the embedded relation matrix. It is natural to ask whether we can leverage these correlations to learn better embeddings for the entities and relations in a knowledge graph. In this paper, we propose to learn the embedded relation matrix by decomposing it as a product of two low-dimensional matrices, for characterizing the low-rank structure. The proposed method, called TransCoRe (Translation-Based Method via Modeling the Correlations of Relations), learns the embeddings of entities and relations with translation-based framework. Experimental results based on the benchmark datasets of WordNet and Freebase demonstrate that our method outperforms the typical baselines on link prediction and triple classification tasks.展开更多
This paper discusses conditions under which the solution of linear system with minimal Schatten-p norm, 0 〈 p ≤ 1, is also the lowest-rank solution of this linear system. To study this problem, an important tool is ...This paper discusses conditions under which the solution of linear system with minimal Schatten-p norm, 0 〈 p ≤ 1, is also the lowest-rank solution of this linear system. To study this problem, an important tool is the restricted isometry constant (RIC). Some papers provided the upper bounds of RIC to guarantee that the nuclear-norm minimization stably recovers a low-rank matrix. For example, Fazel improved the upper bounds to δ4Ar 〈 0.558 and δ3rA 〈 0.4721, respectively. Recently, the upper bounds of RIC can be improved to δ2rA 〈 0.307. In fact, by using some methods, the upper bounds of RIC can be improved to δ2tA 〈 0.4931 and δrA 〈 0.309. In this paper, we focus on the lower bounds of RIC, we show that there exists linear maps A with δ2rA 〉1√2 or δrA 〉 1/3 for which nuclear norm recovery fail on some matrix with rank at most r. These results indicate that there is only a little limited room for improving the upper bounds for δ2rA and δrA.Furthermore, we also discuss the upper bound of restricted isometry constant associated with linear maps A for Schatten p (0 〈 p 〈 1) quasi norm minimization problem.展开更多
In high dimensional data, many dimensions are irrelevant to each other and clusters are usually hidden under noise. As an important extension of the traditional clustering, subspace clustering can be utilized to simul...In high dimensional data, many dimensions are irrelevant to each other and clusters are usually hidden under noise. As an important extension of the traditional clustering, subspace clustering can be utilized to simultaneously cluster the high dimensional data into several subspaces and associate the low-dimensional subspaces with the corresponding points. In subspace clustering, it is a crucial step to construct an affinity matrix with block-diagonal form, in which the blocks correspond to different clusters. The distance-based methods and the representation-based methods are two major types of approaches for building an informative affinity matrix. In general, it is the difference between the density inside and outside the blocks that determines the efficiency and accuracy of the clustering. In this work, we introduce a well-known approach in statistic physics method, namely link prediction, to enhance subspace clustering by reinforcing the affinity matrix. More importantly, we introduce the idea to combine complex network theory with machine learning. By revealing the hidden links inside each block, we maximize the density of each block along the diagonal, while restrain the remaining non-blocks in the affinity matrix as sparse as possible. Our method has been shown to have a remarkably improved clustering accuracy comparing with the existing methods on well-known datasets.展开更多
Hybrid precoding is a cost-effective approach to support directional transmissions for millimeter-wave(mmWave)communications,but its precoder design is highly complicated.In this paper,we propose a new hybrid precoder...Hybrid precoding is a cost-effective approach to support directional transmissions for millimeter-wave(mmWave)communications,but its precoder design is highly complicated.In this paper,we propose a new hybrid precoder implementation,namely the double phase shifter(DPS)implementation,which enables highly tractable hybrid precoder design.Efficient algorithms are then developed for two popular hybrid precoder structures,i.e.,the fully-and partially-connected structures.For the fully-connected one,the RF-only precoding and hybrid precoding problems are formulated as a least absolute shrinkage and selection operator problem and a low-rank matrix approximation problem,respectively.In this way,computationally efficient algorithms are provided to approach the performance of the fully digital one with a small number of radio frequency(RF)chains.On the other hand,the hybrid precoder design in the partially-connected structure is identified as an eigenvalue problem.To enhance the performance of this cost-effective structure,dynamic mapping from RF chains to antennas is further proposed,for which a greedy algorithm and a modified K-means algorithm are developed.Simulation results demonstrate the performance gains of the proposed hybrid precoding algorithms over existing ones.It shows that,with the proposed DPS implementation,the fully-connected structure enjoys both satisfactory performance and low design complexity while the partially-connected one serves as an economic solution with low hardware complexity.展开更多
文摘A number of previous papers have studied the problem of recovering low-rank matrices with noise, further combining the noisy and perturbed cases, we propose a nonconvex Schatten p-norm minimization method to deal with the recovery of fully perturbed low-rank matrices. By utilizing the p-null space property (p-NSP) and the p-restricted isometry property (p-RIP) of the matrix, sufficient conditions to ensure that the stable and accurate reconstruction for low-rank matrix in the case of full perturbation are derived, and two upper bound recovery error estimation ns are given. These estimations are characterized by two vital aspects, one involving the best r-approximation error and the other concerning the overall noise. Specifically, this paper obtains two new error upper bounds based on the fact that p-RIP and p-NSP are able to recover accurately and stably low-rank matrix, and to some extent improve the conditions corresponding to RIP.
文摘Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data’s underlying structure and their combination allows for a more nuanced and accurate separation of the main data components from outliers and noise. Then RPCA-SL is solved by employing a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulation and real data demonstrate significant advancements.
基金supported by the National Natural Science Foundation of China(No.61271014)the Specialized Research Fund for the Doctoral Program of Higher Education(No.20124301110003)the Graduated Students Innovation Fund of Hunan Province(No.CX2012B238)
文摘The method of recovering a low-rank matrix with an unknown fraction whose entries are arbitrarily corrupted is known as the robust principal component analysis (RPCA). This RPCA problem, under some conditions, can be exactly solved via convex optimization by minimizing a combination of the nuclear norm and the 11 norm. In this paper, an algorithm based on the Douglas-Rachford splitting method is proposed for solving the RPCA problem. First, the convex optimization problem is solved by canceling the constraint of the variables, and ~hen the proximity operators of the objective function are computed alternately. The new algorithm can exactly recover the low-rank and sparse components simultaneously, and it is proved to be convergent. Numerical simulations demonstrate the practical utility of the proposed algorithm.
基金the National Natural Science Foundation of China under Grant 61771258 and Grant U1804142the Key Science and Technology Project of Henan Province under Grants 202102210280,212102210159,222102210192,232102210051the Key Scientific Research Projects of Colleges and Universities in Henan Province under Grant 20B460008.
文摘Due to the fine-grained communication scenarios characterization and stability,Wi-Fi channel state information(CSI)has been increasingly applied to indoor sensing tasks recently.Although spatial variations are explicitlyreflected in CSI measurements,the representation differences caused by small contextual changes are easilysubmerged in the fluctuations of multipath effects,especially in device-free Wi-Fi sensing.Most existing datasolutions cannot fully exploit the temporal,spatial,and frequency information carried by CSI,which results ininsufficient sensing resolution for indoor scenario changes.As a result,the well-liked machine learning(ML)-based CSI sensing models still struggling with stable performance.This paper formulates a time-frequency matrixon the premise of demonstrating that the CSI has low-rank potential and then proposes a distributed factorizationalgorithm to effectively separate the stable structured information and context fluctuations in the CSI matrix.Finally,a multidimensional tensor is generated by combining the time-frequency gradients of CSI,which containsrich and fine-grained real-time contextual information.Extensive evaluations and case studies highlight thesuperiority of the proposal.
基金supported in part by the National Natural Science Foundation of China(Grant No.12201473)by the Science Foundation of Wuhan Institute of Technology(Grant No.K202256)+3 种基金The research of M.K.Ng was supported in part by the HKRGC GRF(Grant Nos.12300218,12300519,17201020,17300021)The research of X.Zhang was supported in part by the National Natural Science Foundation of China(Grant No.12171189)by the Knowledge Innovation Project of Wuhan(Grant No.2022010801020279)by the Fundamental Research Funds for the Central Universities(Grant No.CCNU22JC023).
文摘In this paper,we study the low-rank matrix completion problem with Poisson observations,where only partial entries are available and the observations are in the presence of Poisson noise.We propose a novel model composed of the Kullback-Leibler(KL)divergence by using the maximum likelihood estimation of Poisson noise,and total variation(TV)and nuclear norm constraints.Here the nuclear norm and TV constraints are utilized to explore the approximate low-rankness and piecewise smoothness of the underlying matrix,respectively.The advantage of these two constraints in the proposed model is that the low-rankness and piecewise smoothness of the underlying matrix can be exploited simultaneously,and they can be regularized for many real-world image data.An upper error bound of the estimator of the proposed model is established with high probability,which is not larger than that of only TV or nuclear norm constraint.To the best of our knowledge,this is the first work to utilize both low-rank and TV constraints with theoretical error bounds for matrix completion under Poisson observations.Extensive numerical examples on both synthetic data and real-world images are reported to corroborate the superiority of the proposed approach.
基金Project supported by the National Natural Science Foundation of China(No.62072024)the Outstanding Youth Program of Beijing University of Civil Engineering and Architecture,China(No.JDJQ20220805)the Shenzhen Stability Support General Project(Type A),China(No.20200826104014001)。
文摘Low-rank matrix decomposition with first-order total variation(TV)regularization exhibits excellent performance in exploration of image structure.Taking advantage of its excellent performance in image denoising,we apply it to improve the robustness of deep neural networks.However,although TV regularization can improve the robustness of the model,it reduces the accuracy of normal samples due to its over-smoothing.In our work,we develop a new low-rank matrix recovery model,called LRTGV,which incorporates total generalized variation(TGV)regularization into the reweighted low-rank matrix recovery model.In the proposed model,TGV is used to better reconstruct texture information without over-smoothing.The reweighted nuclear norm and Li-norm can enhance the global structure information.Thus,the proposed LRTGV can destroy the structure of adversarial noise while re-enhancing the global structure and local texture of the image.To solve the challenging optimal model issue,we propose an algorithm based on the alternating direction method of multipliers.Experimental results show that the proposed algorithm has a certain defense capability against black-box attacks,and outperforms state-of-the-art low-rank matrix recovery methods in image restoration.
文摘As the development of smart grid and energy internet, this leads to a significantincrease in the amount of data transmitted in real time. Due to the mismatch withcommunication networks that were not designed to carry high-speed and real time data,data losses and data quality degradation may happen constantly. For this problem,according to the strong spatial and temporal correlation of electricity data which isgenerated by human’s actions and feelings, we build a low-rank electricity data matrixwhere the row is time and the column is user. Inspired by matrix decomposition, we dividethe low-rank electricity data matrix into the multiply of two small matrices and use theknown data to approximate the low-rank electricity data matrix and recover the missedelectrical data. Based on the real electricity data, we analyze the low-rankness of theelectricity data matrix and perform the Matrix Decomposition-based method on the realdata. The experimental results verify the efficiency and efficiency of the proposed scheme.
文摘Given a symmetric matrix X, we consider the problem of finding a low-rank positive approximant of X. That is, a symmetric positive semidefinite matrix, S, whose rank is smaller than a given positive integer, , which is nearest to X in a certain matrix norm. The problem is first solved with regard to four common norms: The Frobenius norm, the Schatten p-norm, the trace norm, and the spectral norm. Then the solution is extended to any unitarily invariant matrix norm. The proof is based on a subtle combination of Ky Fan dominance theorem, a modified pinching principle, and Mirsky minimum-norm theorem.
基金supported by National Natural Science Foundation of China (Grant Nos. 61977053 and 11922116)。
文摘In this paper, we investigate the recovery of an undamped spectrally sparse signal and its spectral components from a set of regularly spaced samples within the framework of spectral compressed sensing and super-resolution. We show that the existing Hankel-based optimization methods suffer from the fundamental limitation that the prior knowledge of undampedness cannot be exploited. We propose a new low-rank optimization model partially inspired by forward-backward processing for line spectral estimation and show its capability to restrict the spectral poles to the unit circle. We present convex relaxation approaches with the model and show their provable accuracy and robustness to bounded and sparse noise. All our results are generalized from one-dimensional to arbitrary-dimensional spectral compressed sensing. Numerical simulations are provided to corroborate our analysis and show the efficiency of our model and the advantageous performance of our approach in terms of accuracy and resolution compared with the state-of-the-art Hankel and atomic norm methods.
基金supported by National Natural Science Foundation of China(Grant Nos.11271050 and 11371183)Beijing Center for Mathematics and Information Interdisciplinary Sciences
文摘In this paper, a sufficient condition is obtained to ensure the stable recovery(ε≠ 0) or exact recovery(ε = 0) of all r-rank matrices X ∈ Rm×nfrom b = A(X) + z via nonconvex Schatten p-minimization for anyδ4r∈ [3~(1/2))2, 1). Moreover, we determine the range of parameter p with any given δ4r∈ [(3~(1/2))/22, 1). In fact, for any given δ4r∈ [3~(1/2))2, 1), p ∈(0, 2(1- δ4r)] suffices for the stable recovery or exact recovery of all r-rank matrices.
基金The work was supported in part by the National Natural Science Foundation of China(Nos.11431002,11171018,71271021,11301022).
文摘This paper aims at achieving a simultaneously sparse and low-rank estimator from the semidefinite population covariance matrices.We first benefit from a convex optimization which develops l1-norm penalty to encourage the sparsity and nuclear norm to favor the low-rank property.For the proposed estimator,we then prove that with high probability,the Frobenius norm of the estimation rate can be of order O(√((slgg p)/n))under a mild case,where s and p denote the number of nonzero entries and the dimension of the population covariance,respectively and n notes the sample capacity.Finally,an efficient alternating direction method of multipliers with global convergence is proposed to tackle this problem,and merits of the approach are also illustrated by practicing numerical simulations.
基金supported by the National Natural Science Foundation of China(11871109)NSAF(U1830107)the Science Challenge Project(TZ2018001)
文摘This paper considers approximately sparse signal and low-rank matrix’s recovery via truncated norm minimization minx∥xT∥q and minX∥XT∥Sq from noisy measurements.We first introduce truncated sparse approximation property,a more general robust null space property,and establish the stable recovery of signals and matrices under the truncated sparse approximation property.We also explore the relationship between the restricted isometry property and truncated sparse approximation property.And we also prove that if a measurement matrix A or linear map A satisfies truncated sparse approximation property of order k,then the first inequality in restricted isometry property of order k and of order 2k can hold for certain different constantsδk andδ2k,respectively.Last,we show that ifδs(k+|T^c|)<√(s-1)/s for some s≥4/3,then measurement matrix A and linear map A satisfy truncated sparse approximation property of order k.It should be pointed out that when Tc=Ф,our conclusion implies that sparse approximation property of order k is weaker than restricted isometry property of order sk.
基金supported by the National Natural Science Foundation of China (No. 61300164)
文摘As a kind of weaker supervisory information, pairwise constraints can be exploited to guide the data analysis process, such as data clustering. This paper formulates pairwise constraint propagation, which aims to predict the large quantity of unknown constraints from scarce known constraints, as a low-rank matrix recovery(LMR) problem. Although recent advances in transductive learning based on matrix completion can be directly adopted to solve this problem, our work intends to develop a more general low-rank matrix recovery solution for pairwise constraint propagation, which not only completes the unknown entries in the constraint matrix but also removes the noise from the data matrix. The problem can be effectively solved using an augmented Lagrange multiplier method. Experimental results on constrained clustering tasks based on the propagated pairwise constraints have shown that our method can obtain more stable results than state-of-the-art algorithms,and outperform them.
基金the HKRGC GRF 12306616,12200317,12300218 and 12300519,and HKU Grant 104005583.
文摘Tensor robust principal component analysis has received a substantial amount of attention in various fields.Most existing methods,normally relying on tensor nuclear norm minimization,need to pay an expensive computational cost due to multiple singular value decompositions at each iteration.To overcome the drawback,we propose a scalable and efficient method,named parallel active subspace decomposition,which divides the unfolding along each mode of the tensor into a columnwise orthonormal matrix(active subspace)and another small-size matrix in parallel.Such a transformation leads to a nonconvex optimization problem in which the scale of nuclear norm minimization is generally much smaller than that in the original problem.We solve the optimization problem by an alternating direction method of multipliers and show that the iterates can be convergent within the given stopping criterion and the convergent solution is close to the global optimum solution within the prescribed bound.Experimental results are given to demonstrate that the performance of the proposed model is better than the state-of-the-art methods.
基金This work was supported in part by the Special Funds for Major State Basic Research Projectsthe National Natural Science Foundation of China(Grants No.60372033 and 9901936)NSF CCR9901986,DMS 0311800.
文摘We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on structure analysis of SVD of columnpartitioned matrices and sparse low-rank approximation; for the nonlinear case we investigate methods for nonlinear dimensionality reduction and manifold learning. The problems we address have attracted great deal of interest in data mining and machine learning.
基金supported by the NSF of China(Grant No.12201259)supported by NSF of China(Grant No.11971208)+7 种基金supported by the NSF of China(Grant No.12201260)Jiangxi Provincial NSF(Grant No.20224BAB211008)Jiangxi Provincial NSF(Grant No.20212BAB211010)Science and Technology research project of the Education Department of Jiangxi Province(Grant No.GJJ2200537)Science and Technology Research Project of the Education Department of Jiangxi Province(Grant No.GJJ200545)NSSF of China(Grant No.21&ZD152)NSSF of China(Grant No.20BTJ008)China Postdoctoral Science Foundation(Grant No.2022M711425)。
文摘In applications involving,e.g.,panel data,images,genomics microarrays,etc.,trace regression models are useful tools.To address the high-dimensional issue of these applications,it is common to assume some sparsity property.For the case of the parameter matrix being simultaneously low rank and elements-wise sparse,we estimate the parameter matrix through the least-squares approach with the composite penalty combining the nuclear norm and the l1norm.We extend the existing analysis of the low-rank trace regression with i.i.d.errors to exponentialβ-mixing errors.The explicit convergence rate and the asymptotic properties of the proposed estimator are established.Simulations,as well as a real data application,are also carried out for illustration.
基金This work was supported by the National Basic Research 973 Program of China under Grant No. 2014CB340405, the National Key Research and Development Program of China under Grant No. 2016YFB1000902, and the National Natural Science Foundation of China under Grant Nos. 61402442, 61272177, 61173008, 61232010, 61303244, 61572469, 91646120 and 61572473.
文摘Knowledge graph embedding, which maps the entities and relations into low-dimensional vector spaces, has demonstrated its effectiveness in many tasks such as link prediction and relation extraction. Typical methods include TransE, TransH, and TransR. All these methods map different relations into the vector space separately and the intrinsic correlations of these relations are ignored. It is obvious that there exist some correlations among relations because different relations may connect to a common entity. For example, the triples (Steve Jobs, PlaceOfBrith, California) and (Apple Inc., Location, California) share the same entity California as their tail entity. We analyze the embedded relation matrices learned by TransE/TransH/TransR, and find that the correlations of relations do exist and they are showed as low-rank structure over the embedded relation matrix. It is natural to ask whether we can leverage these correlations to learn better embeddings for the entities and relations in a knowledge graph. In this paper, we propose to learn the embedded relation matrix by decomposing it as a product of two low-dimensional matrices, for characterizing the low-rank structure. The proposed method, called TransCoRe (Translation-Based Method via Modeling the Correlations of Relations), learns the embeddings of entities and relations with translation-based framework. Experimental results based on the benchmark datasets of WordNet and Freebase demonstrate that our method outperforms the typical baselines on link prediction and triple classification tasks.
基金supported by National Natural Science Foundation of China (Grant Nos.91130009, 11171299 and 11041005)National Natural Science Foundation of Zhejiang Province in China (Grant Nos. Y6090091 and Y6090641)
文摘This paper discusses conditions under which the solution of linear system with minimal Schatten-p norm, 0 〈 p ≤ 1, is also the lowest-rank solution of this linear system. To study this problem, an important tool is the restricted isometry constant (RIC). Some papers provided the upper bounds of RIC to guarantee that the nuclear-norm minimization stably recovers a low-rank matrix. For example, Fazel improved the upper bounds to δ4Ar 〈 0.558 and δ3rA 〈 0.4721, respectively. Recently, the upper bounds of RIC can be improved to δ2rA 〈 0.307. In fact, by using some methods, the upper bounds of RIC can be improved to δ2tA 〈 0.4931 and δrA 〈 0.309. In this paper, we focus on the lower bounds of RIC, we show that there exists linear maps A with δ2rA 〉1√2 or δrA 〉 1/3 for which nuclear norm recovery fail on some matrix with rank at most r. These results indicate that there is only a little limited room for improving the upper bounds for δ2rA and δrA.Furthermore, we also discuss the upper bound of restricted isometry constant associated with linear maps A for Schatten p (0 〈 p 〈 1) quasi norm minimization problem.
基金the National Natural Science Foundation of China (Grant Nos. 61433014 and 71601029).
文摘In high dimensional data, many dimensions are irrelevant to each other and clusters are usually hidden under noise. As an important extension of the traditional clustering, subspace clustering can be utilized to simultaneously cluster the high dimensional data into several subspaces and associate the low-dimensional subspaces with the corresponding points. In subspace clustering, it is a crucial step to construct an affinity matrix with block-diagonal form, in which the blocks correspond to different clusters. The distance-based methods and the representation-based methods are two major types of approaches for building an informative affinity matrix. In general, it is the difference between the density inside and outside the blocks that determines the efficiency and accuracy of the clustering. In this work, we introduce a well-known approach in statistic physics method, namely link prediction, to enhance subspace clustering by reinforcing the affinity matrix. More importantly, we introduce the idea to combine complex network theory with machine learning. By revealing the hidden links inside each block, we maximize the density of each block along the diagonal, while restrain the remaining non-blocks in the affinity matrix as sparse as possible. Our method has been shown to have a remarkably improved clustering accuracy comparing with the existing methods on well-known datasets.
基金supported in part by the Hong Kong Research Grants Council under Grant No.16210216 and in part by the Alexander von Humboldt Foundation.
文摘Hybrid precoding is a cost-effective approach to support directional transmissions for millimeter-wave(mmWave)communications,but its precoder design is highly complicated.In this paper,we propose a new hybrid precoder implementation,namely the double phase shifter(DPS)implementation,which enables highly tractable hybrid precoder design.Efficient algorithms are then developed for two popular hybrid precoder structures,i.e.,the fully-and partially-connected structures.For the fully-connected one,the RF-only precoding and hybrid precoding problems are formulated as a least absolute shrinkage and selection operator problem and a low-rank matrix approximation problem,respectively.In this way,computationally efficient algorithms are provided to approach the performance of the fully digital one with a small number of radio frequency(RF)chains.On the other hand,the hybrid precoder design in the partially-connected structure is identified as an eigenvalue problem.To enhance the performance of this cost-effective structure,dynamic mapping from RF chains to antennas is further proposed,for which a greedy algorithm and a modified K-means algorithm are developed.Simulation results demonstrate the performance gains of the proposed hybrid precoding algorithms over existing ones.It shows that,with the proposed DPS implementation,the fully-connected structure enjoys both satisfactory performance and low design complexity while the partially-connected one serves as an economic solution with low hardware complexity.