Journal Articles
21 articles found
1. A Perturbation Analysis of Low-Rank Matrix Recovery by Schatten p-Minimization
Authors: Zhaoying Sun, Huimin Wang, Zhihui Zhu. Journal of Applied Mathematics and Physics, 2024, No. 2, pp. 475-487 (13 pages)
A number of previous papers have studied the problem of recovering low-rank matrices with noise. Combining the noisy and perturbed cases, we propose a nonconvex Schatten p-norm minimization method for the recovery of fully perturbed low-rank matrices. By utilizing the p-null space property (p-NSP) and the p-restricted isometry property (p-RIP) of the matrix, sufficient conditions are derived that ensure stable and accurate reconstruction of a low-rank matrix under full perturbation, and two upper-bound estimates of the recovery error are given. These estimates are characterized by two vital quantities, one involving the best r-approximation error and the other the overall noise. Specifically, this paper obtains two new upper error bounds based on the fact that p-RIP and p-NSP recover a low-rank matrix accurately and stably, and to some extent improves the corresponding RIP conditions.
Keywords: nonconvex Schatten p-norm; low-rank matrix recovery; p-null space property; restricted isometry property
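For context, the Schatten p-(quasi-)norm that this entry minimizes is just the lp (quasi-)norm of the singular values; for p < 1 it is nonconvex and promotes low rank more aggressively than the nuclear norm (p = 1). A minimal NumPy sketch (the function name is ours, for illustration only):

```python
import numpy as np

def schatten_p(X, p):
    """Schatten p-(quasi-)norm: (sum_i sigma_i^p)^(1/p).
    A norm for p >= 1; a rank-promoting quasi-norm for 0 < p < 1."""
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return (s ** p).sum() ** (1.0 / p)

# p = 1 gives the nuclear norm, p = 2 the Frobenius norm.
X = np.array([[3.0, 0.0], [0.0, 4.0]])
print(schatten_p(X, 1))  # 7.0 (sum of singular values 4 and 3)
print(schatten_p(X, 2))  # 5.0 (Frobenius norm)
```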
2. Robust Principal Component Analysis Integrating Sparse and Low-Rank Priors
Authors: Wei Zhai, Fanlong Zhang. Journal of Computer and Communications, 2024, No. 4, pp. 1-13 (13 pages)
Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data's underlying structure, and their combination allows a more nuanced and accurate separation of the main data components from outliers and noise. RPCA-SL is then solved with a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulated and real data demonstrate significant advancements.
Keywords: robust principal component analysis; sparse matrix; low-rank matrix; hyperspectral image
3. Proximity point algorithm for low-rank matrix recovery from sparse noise corrupted data
Authors: Wei Zhu, Shi Shu, Lizhi Cheng. Applied Mathematics and Mechanics (English Edition) (SCIE, EI), 2014, No. 2, pp. 259-268 (10 pages)
The method of recovering a low-rank matrix with an unknown fraction of its entries arbitrarily corrupted is known as robust principal component analysis (RPCA). Under some conditions, the RPCA problem can be exactly solved via convex optimization by minimizing a combination of the nuclear norm and the l1 norm. In this paper, an algorithm based on the Douglas-Rachford splitting method is proposed for solving the RPCA problem. First, the convex optimization problem is solved by canceling the constraint on the variables, and then the proximity operators of the objective function are computed alternately. The new algorithm can exactly recover the low-rank and sparse components simultaneously, and it is proved to be convergent. Numerical simulations demonstrate the practical utility of the proposed algorithm.
Keywords: low-rank matrix recovery; sparse noise; Douglas-Rachford splitting method; proximity operator
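The two proximity operators that such splitting schemes alternate between have closed forms: entrywise soft-thresholding for the l1 term and singular value thresholding (SVT) for the nuclear norm. A hedged NumPy sketch of these standard operators (not the paper's exact iteration):

```python
import numpy as np

def soft_threshold(X, tau):
    """Prox of tau*||.||_1: shrink each entry toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Prox of tau*||.||_* (nuclear norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding singular values reduces rank once tau exceeds them.
L = svt(np.diag([5.0, 1.0]), 2.0)
print(np.linalg.matrix_rank(L))  # 1 (the small singular value is zeroed)
```

An RPCA splitting method applies these two operators in alternation to the low-rank and sparse blocks of the current iterate.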
4. Multidomain Correlation-Based Multidimensional CSI Tensor Generation for Device-Free Wi-Fi Sensing
Authors: Liufeng Du, Shaoru Shang, Linghua Zhang, Chong Li, Jianing Yang, Xiyan Tian. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1749-1767 (19 pages)
Owing to its fine-grained characterization of communication scenarios and its stability, Wi-Fi channel state information (CSI) has recently been increasingly applied to indoor sensing tasks. Although spatial variations are explicitly reflected in CSI measurements, the representation differences caused by small contextual changes are easily submerged in the fluctuations of multipath effects, especially in device-free Wi-Fi sensing. Most existing data solutions cannot fully exploit the temporal, spatial, and frequency information carried by CSI, which results in insufficient sensing resolution for indoor scenario changes. As a result, popular machine learning (ML)-based CSI sensing models still struggle to achieve stable performance. This paper formulates a time-frequency matrix after demonstrating that CSI has low-rank potential, and then proposes a distributed factorization algorithm to effectively separate the stable structured information from the contextual fluctuations in the CSI matrix. Finally, a multidimensional tensor is generated by combining the time-frequency gradients of CSI, which contains rich and fine-grained real-time contextual information. Extensive evaluations and case studies highlight the superiority of the proposal.
Keywords: Wi-Fi sensing; device-free; CSI; low-rank matrix factorization
5. Low-Rank Matrix Completion with Poisson Observations via Nuclear Norm and Total Variation Constraints
Authors: Duo Qiu, Michael K. Ng, Xiongjun Zhang. Journal of Computational Mathematics (SCIE, CSCD), 2024, No. 6, pp. 1427-1451 (25 pages)
In this paper, we study the low-rank matrix completion problem with Poisson observations, where only partial entries are available and the observations are in the presence of Poisson noise. We propose a novel model composed of the Kullback-Leibler (KL) divergence, obtained from the maximum likelihood estimation under Poisson noise, together with total variation (TV) and nuclear norm constraints. Here the nuclear norm and TV constraints are utilized to explore the approximate low-rankness and piecewise smoothness of the underlying matrix, respectively. The advantage of combining these two constraints in the proposed model is that the low-rankness and piecewise smoothness of the underlying matrix can be exploited simultaneously, and both can be regularized for many real-world image data. An upper error bound of the estimator of the proposed model is established with high probability, which is not larger than that with only the TV or nuclear norm constraint. To the best of our knowledge, this is the first work to utilize both low-rank and TV constraints with theoretical error bounds for matrix completion under Poisson observations. Extensive numerical examples on both synthetic data and real-world images are reported to corroborate the superiority of the proposed approach.
Keywords: low-rank matrix completion; nuclear norm; total variation; Poisson observations
6. Low-rank matrix recovery with total generalized variation for defending adversarial examples
Authors: Wen Li, Hengyou Wang, Lianzhi Huo, Qiang He, Linlin Chen, Zhiquan He, Wing W. Y. Ng. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, No. 3, pp. 432-445 (14 pages)
Low-rank matrix decomposition with first-order total variation (TV) regularization exhibits excellent performance in exploring image structure. Taking advantage of its excellent performance in image denoising, we apply it to improve the robustness of deep neural networks. However, although TV regularization can improve the robustness of the model, it reduces the accuracy on normal samples due to its over-smoothing. In our work, we develop a new low-rank matrix recovery model, called LRTGV, which incorporates total generalized variation (TGV) regularization into the reweighted low-rank matrix recovery model. In the proposed model, TGV is used to better reconstruct texture information without over-smoothing. The reweighted nuclear norm and l1-norm can enhance the global structure information. Thus, the proposed LRTGV can destroy the structure of adversarial noise while re-enhancing the global structure and local texture of the image. To solve the challenging optimization model, we propose an algorithm based on the alternating direction method of multipliers. Experimental results show that the proposed algorithm has a certain defense capability against black-box attacks, and outperforms state-of-the-art low-rank matrix recovery methods in image restoration.
Keywords: total generalized variation; low-rank matrix; alternating direction method of multipliers; adversarial example
7. Electrical Data Matrix Decomposition in Smart Grid (Cited by 1)
Authors: Qian Dang, Huafeng Zhang, Bo Zhao, Yanwen He, Shiming He, Hye-Jin Kim. Journal on Internet of Things, 2019, No. 1, pp. 1-7 (7 pages)
With the development of the smart grid and the energy internet, the amount of data transmitted in real time has increased significantly. Because the communication networks were not designed to carry high-speed, real-time data, data losses and data quality degradation happen constantly. To address this problem, and exploiting the strong spatial and temporal correlation of electricity data generated by human actions and habits, we build a low-rank electricity data matrix whose rows are time and whose columns are users. Inspired by matrix decomposition, we factor the low-rank electricity data matrix into the product of two small matrices, use the known data to approximate the low-rank electricity data matrix, and recover the missing electrical data. Based on real electricity data, we analyze the low-rankness of the electricity data matrix and apply the matrix-decomposition-based method to the real data. The experimental results verify the effectiveness and efficiency of the proposed scheme.
Keywords: electrical data recovery; matrix decomposition; low-rankness; smart grid
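The factorization idea in this entry — fit the low-rank data matrix as a product of two thin matrices using only the observed entries, then read the missing entries off the product — can be sketched with alternating ridge-regularized least squares. This is a generic sketch, not the paper's algorithm; all parameter values are illustrative:

```python
import numpy as np

def als_complete(M, mask, rank=1, lam=1e-6, iters=200, seed=0):
    """Fill missing entries of M (mask == 1 where observed) by fitting
    M ~ U @ V.T on the observed entries via alternating least squares."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    reg = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                      # update each row of U
            idx = mask[i] == 1
            A = V[idx]
            U[i] = np.linalg.solve(A.T @ A + reg, A.T @ M[i, idx])
        for j in range(n):                      # update each row of V
            idx = mask[:, j] == 1
            A = U[idx]
            V[j] = np.linalg.solve(A.T @ A + reg, A.T @ M[idx, j])
    return U @ V.T

# Noiseless rank-1 demo: hide three entries, then recover them.
truth = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
mask = np.ones_like(truth, dtype=int)
mask[0, 0] = mask[2, 1] = mask[3, 2] = 0
est = als_complete(truth * mask, mask, rank=1)
print(np.abs(est - truth).max())  # near zero for this noiseless rank-1 example
```

For rank-1 noiseless data with a connected observation pattern, the exact fit on observed entries determines the hidden entries as well, which is why the recovery succeeds here.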
8. Low-Rank Positive Approximants of Symmetric Matrices
Authors: Achiya Dax. Advances in Linear Algebra & Matrix Theory, 2014, No. 3, pp. 172-185 (14 pages)
Given a symmetric matrix X, we consider the problem of finding a low-rank positive approximant of X: that is, a symmetric positive semidefinite matrix, S, whose rank is smaller than a given positive integer and which is nearest to X in a certain matrix norm. The problem is first solved with regard to four common norms: the Frobenius norm, the Schatten p-norm, the trace norm, and the spectral norm. Then the solution is extended to any unitarily invariant matrix norm. The proof is based on a subtle combination of the Ky Fan dominance theorem, a modified pinching principle, and Mirsky's minimum-norm theorem.
Keywords: low-rank positive approximants; unitarily invariant matrix norms
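For the Frobenius norm, the nearest rank-bounded positive semidefinite approximant of a symmetric X comes from its eigendecomposition: keep the r largest eigenvalues clipped at zero and discard the rest. A sketch of that classical construction, consistent with (but not copied from) the paper:

```python
import numpy as np

def low_rank_psd_approx(X, r):
    """Frobenius-nearest PSD approximant of symmetric X with rank <= r:
    keep the r largest eigenvalues, clipped at zero; drop everything else."""
    w, Q = np.linalg.eigh(X)          # eigenvalues in ascending order
    w = np.clip(w, 0.0, None)         # negative eigenvalues contribute nothing PSD
    keep = np.argsort(w)[::-1][:r]    # indices of the r largest remaining
    wr = np.zeros_like(w)
    wr[keep] = w[keep]
    return (Q * wr) @ Q.T             # Q @ diag(wr) @ Q.T

# The rank-1 PSD approximant of diag(3, 1, -2) keeps only the eigenvalue 3.
S = low_rank_psd_approx(np.diag([3.0, 1.0, -2.0]), 1)
print(np.allclose(S, np.diag([3.0, 0.0, 0.0])))  # True
```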
9. New low-rank optimization model and algorithms for spectral compressed sensing
Authors: Zai Yang, Xunmeng Wu, Zongben Xu. Science China Mathematics (SCIE, CSCD), 2024, No. 10, pp. 2409-2432 (24 pages)
In this paper, we investigate the recovery of an undamped spectrally sparse signal and its spectral components from a set of regularly spaced samples within the framework of spectral compressed sensing and super-resolution. We show that the existing Hankel-based optimization methods suffer from the fundamental limitation that the prior knowledge of undampedness cannot be exploited. We propose a new low-rank optimization model partially inspired by forward-backward processing for line spectral estimation and show its capability to restrict the spectral poles to the unit circle. We present convex relaxation approaches with the model and show their provable accuracy and robustness to bounded and sparse noise. All our results are generalized from one-dimensional to arbitrary-dimensional spectral compressed sensing. Numerical simulations are provided to corroborate our analysis and show the efficiency of our model and the advantageous performance of our approach in terms of accuracy and resolution compared with the state-of-the-art Hankel and atomic norm methods.
Keywords: low-rank double Hankel model; doubly enhanced matrix completion; line spectral estimation; spectral compressed sensing; Kronecker's theorem
10. Stable recovery of low-rank matrix via nonconvex Schatten p-minimization (Cited by 3)
Authors: Wengu Chen, Yaling Li. Science China Mathematics (SCIE, CSCD), 2015, No. 12, pp. 2643-2654 (12 pages)
In this paper, a sufficient condition is obtained to ensure the stable recovery (ε ≠ 0) or exact recovery (ε = 0) of all r-rank matrices X ∈ R^(m×n) from b = A(X) + z via nonconvex Schatten p-minimization for any δ_4r ∈ [√3/2, 1). Moreover, we determine the admissible range of the parameter p for any given δ_4r ∈ [√3/2, 1). In fact, for any given δ_4r ∈ [√3/2, 1), p ∈ (0, 2(1 − δ_4r)] suffices for the stable or exact recovery of all r-rank matrices.
Keywords: low-rank matrix recovery; restricted isometry constant; Schatten p-minimization
11. Sparse and Low-Rank Covariance Matrix Estimation (Cited by 2)
Authors: Sheng-Long Zhou, Nai-Hua Xiu, Zi-Yan Luo, Ling-Chen Kong. Journal of the Operations Research Society of China (EI, CSCD), 2015, No. 2, pp. 231-250 (20 pages)
This paper aims at achieving a simultaneously sparse and low-rank estimator of the semidefinite population covariance matrix. We first formulate a convex optimization problem that uses an l1-norm penalty to encourage sparsity and a nuclear norm penalty to favor the low-rank property. For the proposed estimator, we then prove that, with high probability, the Frobenius norm of the estimation error can be of order O(√((s log p)/n)) under a mild condition, where s and p denote the number of nonzero entries and the dimension of the population covariance, respectively, and n denotes the sample size. Finally, an efficient alternating direction method of multipliers with global convergence is proposed to tackle this problem, and the merits of the approach are illustrated by numerical simulations.
Keywords: covariance matrix; sparse and low-rank estimator; estimation rate; alternating direction method of multipliers
12. Truncated sparse approximation property and truncated q-norm minimization (Cited by 1)
Authors: Wengu Chen, Peng Li. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2019, No. 3, pp. 261-283 (23 pages)
This paper considers the recovery of approximately sparse signals and low-rank matrices from noisy measurements via truncated norm minimization, min_x ||x_T||_q and min_X ||X_T||_(S_q). We first introduce the truncated sparse approximation property, a more general robust null space property, and establish the stable recovery of signals and matrices under it. We also explore the relationship between the restricted isometry property and the truncated sparse approximation property, and prove that if a measurement matrix A or linear map A satisfies the truncated sparse approximation property of order k, then the first inequality in the restricted isometry property of order k and of order 2k can hold for certain different constants δ_k and δ_2k, respectively. Last, we show that if δ_(s(k+|T^c|)) < √((s−1)/s) for some s ≥ 4/3, then the measurement matrix A and linear map A satisfy the truncated sparse approximation property of order k. It should be pointed out that when T^c = ∅, our conclusion implies that the sparse approximation property of order k is weaker than the restricted isometry property of order sk.
Keywords: truncated norm minimization; truncated sparse approximation property; restricted isometry property; sparse signal recovery; low-rank matrix recovery; Dantzig selector
13. Pairwise constraint propagation via low-rank matrix recovery
Authors: Zhenyong Fu. Computational Visual Media, 2015, No. 3, pp. 211-220 (10 pages)
As a weaker form of supervisory information, pairwise constraints can be exploited to guide the data analysis process, such as data clustering. This paper formulates pairwise constraint propagation, which aims to predict a large quantity of unknown constraints from scarce known constraints, as a low-rank matrix recovery (LMR) problem. Although recent advances in transductive learning based on matrix completion can be directly adopted to solve this problem, our work develops a more general low-rank matrix recovery solution for pairwise constraint propagation, which not only completes the unknown entries in the constraint matrix but also removes the noise from the data matrix. The problem can be effectively solved using an augmented Lagrange multiplier method. Experimental results on constrained clustering tasks based on the propagated pairwise constraints show that our method obtains more stable results than state-of-the-art algorithms and outperforms them.
Keywords: semi-supervised learning; pairwise constraint propagation; low-rank matrix recovery (LMR); constrained clustering; matrix completion
14. Parallel Active Subspace Decomposition for Tensor Robust Principal Component Analysis
Authors: Michael K. Ng, Xue-Zhong Wang. Communications on Applied Mathematics and Computation, 2021, No. 2, pp. 221-241 (21 pages)
Tensor robust principal component analysis has received a substantial amount of attention in various fields. Most existing methods, normally relying on tensor nuclear norm minimization, pay an expensive computational cost due to multiple singular value decompositions at each iteration. To overcome this drawback, we propose a scalable and efficient method, named parallel active subspace decomposition, which divides the unfolding along each mode of the tensor into a columnwise orthonormal matrix (the active subspace) and another small-size matrix, in parallel. This transformation leads to a nonconvex optimization problem in which the scale of the nuclear norm minimization is generally much smaller than that of the original problem. We solve the optimization problem by an alternating direction method of multipliers and show that the iterates converge within the given stopping criterion and that the convergent solution is close to the global optimum within the prescribed bound. Experimental results demonstrate that the performance of the proposed model is better than that of state-of-the-art methods.
Keywords: principal component analysis; low-rank tensors; nuclear norm minimization; active subspace decomposition; matrix factorization
15. Linear low-rank approximation and nonlinear dimensionality reduction (Cited by 2)
Authors: Zhenyue Zhang (Department of Mathematics, Zhejiang University, Yuquan Campus, Hangzhou 310027, China), Hongyuan Zha (Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA 16802, USA). Science China Mathematics (SCIE), 2004, No. 6, pp. 908-920 (13 pages)
We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on the structure analysis of the SVD of column-partitioned matrices and sparse low-rank approximation; for the nonlinear case we investigate methods for nonlinear dimensionality reduction and manifold learning. The problems we address have attracted a great deal of interest in data mining and machine learning.
Keywords: singular value decomposition; low-rank approximation; sparse matrix; nonlinear dimensionality reduction; principal manifold; subspace alignment; data mining
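The workhorse behind linear low-rank approximation is the truncated SVD: by the Eckart-Young-Mirsky theorem, keeping the r leading singular triplets gives the best rank-r approximation in both the Frobenius and spectral norms. A minimal sketch (function name ours):

```python
import numpy as np

def best_rank_r(X, r):
    """Best rank-r approximation of X (Eckart-Young-Mirsky):
    truncate the SVD to its r leading singular triplets."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

A = np.diag([4.0, 2.0, 1.0])
A1 = best_rank_r(A, 1)
print(np.linalg.matrix_rank(A1))   # 1
print(np.linalg.norm(A - A1))      # sqrt(2^2 + 1^2) ~ 2.236: the optimal error
```

The Frobenius error of the optimal rank-r approximant is exactly the l2 norm of the discarded singular values, which is what the demo prints.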
16. Oracle Inequality for Sparse Trace Regression Models with Exponential β-mixing Errors
Authors: Ling Peng, Xiang Yong Tan, Pei Wen Xiao, Zeinab Rizk, Xiao Hui Liu. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2023, No. 10, pp. 2031-2053 (23 pages)
In applications involving, e.g., panel data, images, and genomic microarrays, trace regression models are useful tools. To address the high-dimensional nature of these applications, it is common to assume some sparsity property. For the case of the parameter matrix being simultaneously low-rank and element-wise sparse, we estimate the parameter matrix through the least-squares approach with a composite penalty combining the nuclear norm and the l1 norm. We extend the existing analysis of low-rank trace regression with i.i.d. errors to exponential β-mixing errors. The explicit convergence rate and the asymptotic properties of the proposed estimator are established. Simulations, as well as a real data application, are carried out for illustration.
Keywords: trace regression model; low-rank matrix; oracle inequality; exponential β-mixing errors
17. Modeling the Correlations of Relations for Knowledge Graph Embedding (Cited by 8)
Authors: Ji-Zhao Zhu, Yan-Tao Jia, Jun Xu, Jian-Zhong Qiao, Xue-Qi Cheng. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2018, No. 2, pp. 323-334 (12 pages)
Knowledge graph embedding, which maps the entities and relations into low-dimensional vector spaces, has demonstrated its effectiveness in many tasks such as link prediction and relation extraction. Typical methods include TransE, TransH, and TransR. All these methods map different relations into the vector space separately, and the intrinsic correlations of these relations are ignored. It is obvious that there exist some correlations among relations because different relations may connect to a common entity. For example, the triples (Steve Jobs, PlaceOfBirth, California) and (Apple Inc., Location, California) share the same entity California as their tail entity. We analyze the embedded relation matrices learned by TransE/TransH/TransR, and find that the correlations of relations do exist and that they appear as low-rank structure over the embedded relation matrix. It is natural to ask whether we can leverage these correlations to learn better embeddings for the entities and relations in a knowledge graph. In this paper, we propose to learn the embedded relation matrix by decomposing it as a product of two low-dimensional matrices, characterizing the low-rank structure. The proposed method, called TransCoRe (Translation-Based Method via Modeling the Correlations of Relations), learns the embeddings of entities and relations within a translation-based framework. Experimental results on the benchmark datasets WordNet and Freebase demonstrate that our method outperforms the typical baselines on link prediction and triple classification tasks.
Keywords: knowledge graph embedding; low-rank matrix decomposition
18. The bounds of restricted isometry constants for low-rank matrix recovery (Cited by 6)
Authors: Huimin Wang, Song Li. Science China Mathematics (SCIE), 2013, No. 6, pp. 1117-1127 (11 pages)
This paper discusses conditions under which the solution of a linear system with minimal Schatten-p norm, 0 < p ≤ 1, is also the lowest-rank solution of that linear system. An important tool for studying this problem is the restricted isometry constant (RIC). Several papers have provided upper bounds on the RIC that guarantee that nuclear-norm minimization stably recovers a low-rank matrix; for example, Fazel improved the upper bounds to δ_4r(A) < 0.558 and δ_3r(A) < 0.4721, respectively, and more recently the bound was improved to δ_2r(A) < 0.307. In fact, by using some methods, the upper bounds can be improved to δ_2r(A) < 0.4931 and δ_r(A) < 0.309. In this paper, we focus on lower bounds for the RIC: we show that there exist linear maps A with δ_2r(A) > 1/√2 or δ_r(A) > 1/3 for which nuclear-norm recovery fails on some matrix of rank at most r. These results indicate that there is only limited room for improving the upper bounds on δ_2r(A) and δ_r(A). Furthermore, we discuss the upper bound of the restricted isometry constant associated with linear maps A for the Schatten-p (0 < p < 1) quasi-norm minimization problem.
Keywords: restricted isometry constants; low-rank matrix recovery; Schatten-p norm; nuclear norm; compressed sensing; convex optimization
19. Enhancing subspace clustering based on dynamic prediction (Cited by 1)
Authors: Ratha Pech, Dong Hao, Hong Cheng, Tao Zhou. Frontiers of Computer Science (SCIE, EI, CSCD), 2019, No. 4, pp. 802-812 (11 pages)
In high-dimensional data, many dimensions are irrelevant to each other and clusters are usually hidden under noise. As an important extension of traditional clustering, subspace clustering can be utilized to simultaneously cluster high-dimensional data into several subspaces and associate the low-dimensional subspaces with the corresponding points. In subspace clustering, a crucial step is to construct an affinity matrix of block-diagonal form, in which the blocks correspond to different clusters. Distance-based methods and representation-based methods are the two major types of approaches for building an informative affinity matrix. In general, it is the difference between the density inside and outside the blocks that determines the efficiency and accuracy of the clustering. In this work, we introduce a well-known approach from statistical physics, namely link prediction, to enhance subspace clustering by reinforcing the affinity matrix. More importantly, we introduce the idea of combining complex network theory with machine learning. By revealing the hidden links inside each block, we maximize the density of each block along the diagonal while keeping the remaining non-block entries of the affinity matrix as sparse as possible. Our method achieves remarkably improved clustering accuracy compared with existing methods on well-known datasets.
Keywords: subspace clustering; link prediction; block-diagonal matrix; low-rank representation; sparse representation
20. Doubling Phase Shifters for Efficient Hybrid Precoder Design in Millimeter-Wave Communication Systems (Cited by 1)
Authors: Xianghao Yu, Jun Zhang, Khaled B. Letaief. Journal of Communications and Information Networks (CSCD), 2019, No. 2, pp. 51-67 (17 pages)
Hybrid precoding is a cost-effective approach to support directional transmissions for millimeter-wave (mmWave) communications, but its precoder design is highly complicated. In this paper, we propose a new hybrid precoder implementation, namely the double phase shifter (DPS) implementation, which enables highly tractable hybrid precoder design. Efficient algorithms are then developed for two popular hybrid precoder structures, i.e., the fully- and partially-connected structures. For the fully-connected structure, the RF-only precoding and hybrid precoding problems are formulated as a least absolute shrinkage and selection operator (LASSO) problem and a low-rank matrix approximation problem, respectively. In this way, computationally efficient algorithms are provided that approach the performance of the fully digital precoder with a small number of radio frequency (RF) chains. On the other hand, the hybrid precoder design in the partially-connected structure is identified as an eigenvalue problem. To enhance the performance of this cost-effective structure, dynamic mapping from RF chains to antennas is further proposed, for which a greedy algorithm and a modified K-means algorithm are developed. Simulation results demonstrate the performance gains of the proposed hybrid precoding algorithms over existing ones. They show that, with the proposed DPS implementation, the fully-connected structure enjoys both satisfactory performance and low design complexity, while the partially-connected structure serves as an economical solution with low hardware complexity.
Keywords: 5G networks; hybrid precoding; low-rank matrix approximation; millimeter-wave communications; multiple-input multiple-output (MIMO); OFDM