Journal Articles
1,144 articles found
1. Robust Latent Factor Analysis for Precise Representation of High-Dimensional and Sparse Data (cited 5 times)
Authors: Di Wu, Xin Luo. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 4, pp. 796-805 (10 pages)
High-dimensional and sparse (HiDS) matrices commonly arise in various industrial applications, e.g., recommender systems (RSs), social networks, and wireless sensor networks. Since they contain rich information, how to accurately represent them is of great significance. A latent factor (LF) model is one of the most popular and successful ways to address this issue. Current LF models mostly adopt an L2-norm-oriented loss to represent an HiDS matrix, i.e., they sum the errors between observed data and predicted ones under the L2-norm. Yet the L2-norm is sensitive to outlier data, which commonly exist in such matrices. For example, an HiDS matrix from RSs often contains many outlier ratings due to heedless or malicious users. To address this issue, this work proposes a smooth L1-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt the smooth L1-norm rather than the L2-norm to form its loss, giving it both strong robustness and high accuracy in predicting the missing data of an HiDS matrix. Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model not only is robust to outlier data but also achieves significantly higher prediction accuracy than state-of-the-art models when predicting the missing data of HiDS matrices.
Keywords: high-dimensional and sparse matrix, L1-norm, L2-norm, latent factor model, recommender system, smooth L1-norm
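The abstract does not give the exact form of the smooth L1-norm loss used by SL-LF; a minimal sketch with a common Huber-style smooth L1 loss and a stochastic-gradient latent factor update (the threshold delta, learning rate, and training loop are illustrative assumptions, not the paper's implementation) could look like this:

```python
import numpy as np

def smooth_l1(e, delta=1.0):
    # Huber-style smooth L1: quadratic near zero, linear in the tails (robust to outliers)
    a = np.abs(e)
    return np.where(a < delta, 0.5 * e ** 2 / delta, a - 0.5 * delta)

def smooth_l1_grad(e, delta=1.0):
    # Derivative of the smooth L1 loss w.r.t. the prediction error e (bounded by 1 in magnitude)
    return np.clip(e / delta, -1.0, 1.0)

def train_lf(observed, n_users, n_items, k=10, lr=0.01, reg=0.05, epochs=20, delta=1.0, seed=0):
    """observed: list of (user, item, rating) triples from a high-dimensional and sparse matrix."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in observed:
            g = smooth_l1_grad(P[u] @ Q[i] - r, delta)
            P[u], Q[i] = P[u] - lr * (g * Q[i] + reg * P[u]), Q[i] - lr * (g * P[u] + reg * Q[i])
    return P, Q
```

Because the gradient of the smooth L1 loss is bounded, a single outlier rating cannot dominate an update the way it can under a squared (L2) loss.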
2. Randomized Latent Factor Model for High-dimensional and Sparse Matrices from Industrial Applications (cited 13 times)
Authors: Mingsheng Shang, Xin Luo, Zhigang Liu, Jia Chen, Ye Yuan, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, No. 1, pp. 131-141 (11 pages)
Latent factor (LF) models are highly effective in extracting useful knowledge from high-dimensional and sparse (HiDS) matrices, which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to reach a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process for LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks to the context of LF analysis, so that the resulting model represents an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF achieves significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to LF analysis of HiDS matrices, which is especially desirable for industrial applications demanding highly efficient models.
Keywords: big data, high-dimensional and sparse matrix, latent factor analysis, latent factor model, randomized learning
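The abstract describes RLF only at a high level. One plausible reading of applying randomized-learning ideas to LF analysis (fix one factor matrix at random, then solve the other in closed form on the observed entries only, as randomized neural networks do for their output weights) is sketched below; this is an illustrative assumption, not the actual RLF training procedure:

```python
import numpy as np

def randomized_lf(R, mask, k=10, reg=0.1, seed=0):
    """R: rating matrix with zeros at unobserved entries; mask: 1 where an entry is observed."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    Q = rng.uniform(-1.0, 1.0, size=(n_items, k))   # randomly drawn item factors, kept fixed
    P = np.zeros((n_users, k))
    for u in range(n_users):
        idx = np.flatnonzero(mask[u])               # items observed for this user
        if idx.size:
            Qi = Q[idx]
            # ridge-regression solve for the user's factors on observed entries only
            P[u] = np.linalg.solve(Qi.T @ Qi + reg * np.eye(k), Qi.T @ R[u, idx])
    return P, Q
```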
3. A method based on vector type for sparse storage and quick access to projection matrix
Authors: 杨娟, 侯慧玲, 石浪. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2015, No. 1, pp. 53-56 (4 pages)
This paper proposes a vector-type-based method for sparse storage of and quick access to the projection matrix, addressing the repetitive computation of projection coefficients and the large space occupation and low retrieval efficiency of the projection matrix in iterative reconstruction algorithms. The method computes the projection coefficients only once and stores the data sparsely in binary format using the variable-size library vector type. In the iterative reconstruction process, these binary files are accessed iteratively and the vector type is used to quickly obtain the projection coefficients of each ray. Experimental results show that the method reduces the memory occupation of the projection matrix and the computation of projection coefficients in the iterative process, and accelerates reconstruction.
Keywords: projection matrix, sparse storage, quick access, vector type
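As a rough illustration of the storage scheme described above (compute each ray's nonzero projection coefficients once, write them sparsely in binary form, and reload them inside the iterative reconstruction loop), a minimal NumPy-based sketch might look as follows; the file naming, data types, and helper names are assumptions, not the paper's implementation:

```python
import numpy as np

def save_ray_coefficients(ray_id, cols, vals, prefix="proj"):
    # Write one ray's nonzero projection coefficients (pixel index, weight) once, in binary form
    np.savez(f"{prefix}_ray{ray_id:06d}.npz",
             cols=np.asarray(cols, dtype=np.int32),
             vals=np.asarray(vals, dtype=np.float32))

def load_ray_coefficients(ray_id, prefix="proj"):
    # Quickly fetch the sparse row back during each pass of the iterative reconstruction
    data = np.load(f"{prefix}_ray{ray_id:06d}.npz")
    return data["cols"], data["vals"]

def forward_project(image_flat, ray_ids, prefix="proj"):
    # p_i = sum_j a_ij * x_j for each ray, using only the stored nonzero coefficients
    proj = np.empty(len(ray_ids))
    for n, r in enumerate(ray_ids):
        cols, vals = load_ray_coefficients(r, prefix)
        proj[n] = vals @ image_flat[cols]
    return proj
```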
4. PERFORMANCE OF SIMPLE-ENCODING IRREGULAR LDPC CODES BASED ON SPARSE GENERATOR MATRIX
Authors: 唐蕾, 仰枫帆. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2006, No. 3, pp. 202-207 (6 pages)
A new method for constructing high-performance systematic irregular low-density parity-check (LDPC) codes based on a sparse generator matrix (G-LDPC) is introduced. The codes greatly reduce the encoding complexity while maintaining the same decoding complexity as traditional regular LDPC (H-LDPC) codes defined by a sparse parity-check matrix. Simulation results show that the proposed irregular LDPC codes offer significant gains over traditional LDPC codes at low SNRs with a few decoding iterations over an additive white Gaussian noise (AWGN) channel.
Keywords: belief propagation iterative decoding algorithm, sparse parity-check matrix, sparse generator matrix, H-LDPC codes, G-LDPC codes
5. A SPARSE MATRIX TECHNIQUE FOR SIMULATING SEMICONDUCTOR DEVICES AND ITS ALGORITHMS (cited 2 times)
Authors: 任建民, 张义门. Journal of Electronics (China), 1990, No. 1, pp. 77-82 (6 pages)
A novel sparse matrix technique for the numerical analysis of semiconductor devices and its algorithms are presented. The storage scheme and calculation procedure of the sparse matrix are described in detail. The sparse matrix technique in device simulation can decrease storage greatly with less CPU time, and its implementation is very easy. Some algorithms and calculation examples showing the time and space characteristics of the sparse matrix technique are given.
Keywords: semiconductor devices, sparse matrix technique, algorithm, CAD
6. Improved Variable Forgetting Factor Proportionate RLS Algorithm with Sparse Penalty and Fast Implementation Using DCD Iterations
Authors: Han Zhen, Zhang Fengrui, Zhang Yu, Han Yanfeng, Jiang Peng. China Communications (SCIE, CSCD), 2024, No. 10, pp. 16-27 (12 pages)
The proportionate recursive least squares (PRLS) algorithm has shown faster convergence and better performance than both proportionate updating (PU) mechanism based least mean squares (LMS) algorithms and RLS algorithms with a sparse regularization term. In this paper, we propose a variable forgetting factor (VFF) PRLS algorithm with a sparse penalty, e.g., the l1-norm, for sparse identification. To reduce the computational complexity of the proposed algorithm, a fast implementation method based on the dichotomous coordinate descent (DCD) algorithm is also derived. Simulation results indicate superior performance of the proposed algorithm.
Keywords: dichotomous coordinate descent, proportionate matrix, RLS, sparse systems, variable forgetting factor
7. Applying Analytical Derivative and Sparse Matrix Techniques to Large-Scale Process Optimization Problems (cited 2 times)
Authors: 仲卫涛, 邵之江, 张余岳, 钱积新. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2000, No. 3, pp. 212-217 (6 pages)
The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) method is studied, and a strategy utilizing these techniques is presented. Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, showing that this strategy is promising and suitable for large-scale process optimization problems.
Keywords: large-scale optimization, open-equation, sequential quadratic programming, analytical derivative, sparse matrix technique
8. Optimal Estimation of High-Dimensional Covariance Matrices with Missing and Noisy Data
Authors: Meiyin Wang, Wanzhou Ye. Advances in Pure Mathematics, 2024, No. 4, pp. 214-227 (14 pages)
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise. However, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, the model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the proposed estimator is rate-optimal. Finally, numerical simulation analysis is performed. The results show that, for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
Keywords: high-dimensional covariance matrix, missing data, sub-Gaussian noise, optimal estimation
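A rough sketch of the kind of estimator described above (a generalized sample covariance built from partially observed, noise-corrupted samples, followed by entrywise hard thresholding) is given below; the exact bias correction and threshold choice in the paper may differ, and centered data are assumed:

```python
import numpy as np

def hard_threshold_covariance(X, mask, noise_var=0.0, lam=0.1):
    """X: n x p matrix of centered samples (unobserved entries set to 0); mask: 1 where observed."""
    p = X.shape[1]
    pair_counts = np.maximum(mask.T.astype(float) @ mask, 1.0)   # samples observing each pair (j, k)
    S = (X.T @ X) / pair_counts                                   # generalized sample covariance
    S[np.diag_indices(p)] -= noise_var                            # remove additive-noise bias on the diagonal
    off = ~np.eye(p, dtype=bool)
    S[off] = np.where(np.abs(S[off]) > lam, S[off], 0.0)          # hard-threshold off-diagonal entries
    return S
```

Hard thresholding keeps only the entries whose magnitude exceeds lam, which is why it benefits from a truly sparse population covariance.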
9. Bounds for Polynomial's Roots from Fiedler and Sparse Companion Matrices for Submultiplicative Matrix Norms (cited 1 time)
Authors: Mamoudou Amadou Bondabou, Ousmane Moussa Tessa, Amidou Morou. Advances in Linear Algebra & Matrix Theory, 2021, No. 1, pp. 1-13 (13 pages)
We use submultiplicative companion matrix norms to provide new bounds for the roots of a given polynomial P(X) in C[X]. From an n×n Fiedler companion matrix C, sparse companion matrices and triangular Hessenberg matrices are introduced. We then identify a special triangular Hessenberg matrix Lr, expected to provide a good estimation of the roots. By applying Gershgorin's theorems to this special matrix in the case of submultiplicative matrix norms, estimates of bounds for the roots are obtained. The obtained bounds are compared to known ones from the literature, namely Cauchy's bounds, Montel's bounds and Carmichael-Mason's bounds. From the defining form of Lr, we see that the more coefficients are close to zero with norm less than 1, the more useful the sparse method is.
Keywords: Fiedler matrices, polynomial's roots, bounds for polynomials, companion matrices, sparse companion matrices, Hessenberg matrices, submultiplicative matrix norm
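The underlying principle, that any submultiplicative norm of a companion matrix bounds the moduli of the roots (every root is an eigenvalue of the companion matrix, and the spectral radius never exceeds a submultiplicative norm), can be illustrated as follows; the specific Fiedler/sparse companion forms and the matrix Lr from the paper are not reproduced here:

```python
import numpy as np

def companion_matrix(coeffs):
    # coeffs = [a_0, a_1, ..., a_n] for P(X) = a_0 + a_1*X + ... + a_n*X^n with a_n != 0
    a = np.asarray(coeffs, dtype=complex)
    n = len(a) - 1
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)            # ones on the subdiagonal
    C[:, -1] = -a[:-1] / a[-1]            # last column holds the normalized coefficients
    return C

def norm_root_bound(coeffs, ord=1):
    # |root| <= ||C|| for any induced (submultiplicative) matrix norm, since roots are eigenvalues of C
    return np.linalg.norm(companion_matrix(coeffs), ord)

def cauchy_bound(coeffs):
    # Classical Cauchy bound, one of the reference bounds the paper compares against
    a = np.asarray(coeffs, dtype=complex)
    return 1.0 + np.max(np.abs(a[:-1] / a[-1]))

# Example: P(X) = X^3 - 2X + 5
print(norm_root_bound([5, -2, 0, 1]), cauchy_bound([5, -2, 0, 1]))
```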
10. Robust Principal Component Analysis Integrating Sparse and Low-Rank Priors
Authors: Wei Zhai, Fanlong Zhang. Journal of Computer and Communications, 2024, No. 4, pp. 1-13 (13 pages)
Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data's underlying structure, and their combination allows for a more nuanced and accurate separation of the main data components from outliers and noise. RPCA-SL is then solved by employing a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulated and real data demonstrate significant advancements.
Keywords: robust principal component analysis, sparse matrix, low-rank matrix, hyperspectral image
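The two proximity operators that a proximal gradient treatment of a low-rank-plus-sparse model typically relies on are sketched below, together with a generic alternating scheme; this is not the exact RPCA-SL algorithm or its specific priors, only an illustration under standard assumptions:

```python
import numpy as np

def prox_nuclear(M, tau):
    # Singular value thresholding: proximity operator of tau * (nuclear norm)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def prox_l1(M, tau):
    # Entrywise soft thresholding: proximity operator of tau * (l1 norm)
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_plus_sparse(D, lam=None, mu=1e-2, iters=200):
    """Alternately update L and S so that D ≈ L + S; a generic sketch, not RPCA-SL itself."""
    lam = lam if lam is not None else 1.0 / np.sqrt(max(D.shape))
    L, S = np.zeros_like(D, dtype=float), np.zeros_like(D, dtype=float)
    for _ in range(iters):
        L = prox_nuclear(D - S, mu)      # low-rank component update
        S = prox_l1(D - L, mu * lam)     # sparse (outlier) component update
    return L, S
```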
11. Performance Prediction Based on Statistics of Sparse Matrix-Vector Multiplication on GPUs (cited 1 time)
Authors: Ruixing Wang, Tongxiang Gu, Ming Li. Journal of Computer and Communications, 2017, No. 6, pp. 65-83 (19 pages)
As sparse matrix-vector multiplication (SpMV) is one of the most essential and important operations in linear algebra, the performance prediction of SpMV on GPUs has received more and more attention in recent years. In 2012, Guo and Wang put forward a new idea to predict the performance of SpMV on GPUs. However, they did not consider the matrix structure completely, so the execution time predicted by their model tends to be inaccurate for general sparse matrices. To address this problem, we propose two new similar models, which take the structure of the matrices into account and make the performance prediction more accurate. In addition, we predict the execution time of SpMV for the CSR-V, CSR-S, ELL and JAD sparse matrix storage formats with the new models on the CUDA platform. Our experimental results show that the prediction accuracy of our models is 1.69 times better than that of Guo and Wang's model on average for most general matrices.
Keywords: sparse matrix-vector multiplication, performance prediction, GPU, normal distribution, uniform distribution
12. Proximity point algorithm for low-rank matrix recovery from sparse noise corrupted data
Authors: 朱玮, 舒适, 成礼智. Applied Mathematics and Mechanics (English Edition) (SCIE, EI), 2014, No. 2, pp. 259-268 (10 pages)
The method of recovering a low-rank matrix with an unknown fraction of its entries arbitrarily corrupted is known as robust principal component analysis (RPCA). Under some conditions, this RPCA problem can be exactly solved via convex optimization by minimizing a combination of the nuclear norm and the l1 norm. In this paper, an algorithm based on the Douglas-Rachford splitting method is proposed for solving the RPCA problem. First, the convex optimization problem is solved by canceling the constraint on the variables, and then the proximity operators of the objective function are computed alternately. The new algorithm can exactly recover the low-rank and sparse components simultaneously, and it is proved to be convergent. Numerical simulations demonstrate the practical utility of the proposed algorithm.
Keywords: low-rank matrix recovery, sparse noise, Douglas-Rachford splitting method, proximity operator
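The convex program mentioned in the abstract (a combination of the nuclear norm and the l1 norm) is usually stated as follows, with D the observed data matrix and λ a balancing parameter:

$$\min_{L,\,S}\ \|L\|_{*} + \lambda\,\|S\|_{1} \quad \text{subject to} \quad L + S = D,$$

where $\|L\|_{*}$ is the sum of the singular values of $L$ and $\|S\|_{1}$ is the entrywise l1 norm of $S$.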
13. Alzheimer's disease classification based on sparse functional connectivity and non-negative matrix factorization
Authors: Li Xuan, Lu Xuesong, Wang Haixian. Journal of Southeast University (English Edition) (EI, CAS), 2019, No. 2, pp. 147-152 (6 pages)
A novel framework is proposed to obtain physiologically meaningful features for Alzheimer's disease (AD) classification based on sparse functional connectivity and non-negative matrix factorization. Specifically, the non-negative adaptive sparse representation (NASR) method is applied to compute the sparse functional connectivity among brain regions based on functional magnetic resonance imaging (fMRI) data for feature extraction. Afterwards, the sparse non-negative matrix factorization (sNMF) method is adopted for dimensionality reduction to obtain low-dimensional features with straightforward physical meaning. The experimental results show that the proposed framework outperforms the competing frameworks in terms of classification accuracy, sensitivity and specificity. Furthermore, three sub-networks, including the default mode network, the basal ganglia-thalamus-limbic network and the temporal-insular network, are found to have notable differences between the AD patients and the healthy subjects. The proposed framework can effectively identify AD patients and has potential for extending the understanding of the pathological changes of AD.
Keywords: Alzheimer's disease, sparse representation, non-negative matrix factorization, functional connectivity
14. Cache performance optimization of irregular sparse matrix multiplication on modern multi-core CPU and GPU
Authors: Liu Li (刘力), Yang Guangwen. High Technology Letters (EI, CAS), 2013, No. 4, pp. 339-345 (7 pages)
This paper focuses on how to optimize the cache performance of sparse matrix-matrix multiplication (SpGEMM). It classifies the cache misses into two categories: one is caused by the irregular distribution pattern of the multiplier matrix, and the other is caused by the multiplicand. For each of them, the paper puts forward an optimization method. The first, hash-based method effectively removes cache misses of the first category and improves performance by a factor of 6 on an Intel 8-core CPU in the best cases. For cache misses of the second category, it proposes a new cache replacement algorithm, which achieves a cache hit rate much higher than other history-based algorithms, and the algorithm is applicable on CELL and GPU. To further verify the effectiveness of our methods, we implement the algorithm on GPU, and the performance scales well with the size of on-chip storage.
Keywords: sparse matrix multiplication, cache miss, scalability, multi-core CPU, GPU
15. Truncated sparse approximation property and truncated q-norm minimization (cited 1 time)
Authors: CHEN Wen-gu, LI Peng. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2019, No. 3, pp. 261-283 (23 pages)
This paper considers the recovery of approximately sparse signals and low-rank matrices via truncated norm minimization, min_x ‖x_T‖_q and min_X ‖X_T‖_Sq, from noisy measurements. We first introduce the truncated sparse approximation property, a more general robust null space property, and establish the stable recovery of signals and matrices under this property. We also explore the relationship between the restricted isometry property and the truncated sparse approximation property, and prove that if a measurement matrix A or linear map A satisfies the truncated sparse approximation property of order k, then the first inequality in the restricted isometry property of order k and of order 2k holds for certain different constants δ_k and δ_2k, respectively. Last, we show that if δ_{s(k+|T^c|)} < √((s−1)/s) for some s ≥ 4/3, then the measurement matrix A and linear map A satisfy the truncated sparse approximation property of order k. It should be pointed out that when T^c = ∅, our conclusion implies that the sparse approximation property of order k is weaker than the restricted isometry property of order sk.
Keywords: truncated norm minimization, truncated sparse approximation property, restricted isometry property, sparse signal recovery, low-rank matrix recovery, Dantzig selector
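Written out, the truncated norm minimization problems from the abstract take roughly the following form; the particular noise constraint shown here is an assumption (the keywords suggest a Dantzig-selector-type constraint may be used instead):

$$\min_{x}\ \|x_T\|_q\ \ \text{s.t.}\ \ \|Ax-b\|_2\le\varepsilon, \qquad \min_{X}\ \|X_T\|_{S_q}\ \ \text{s.t.}\ \ \|\mathcal{A}(X)-b\|_2\le\varepsilon,$$

where $x_T$ retains only the entries of $x$ indexed by the truncation set $T$, and $\|\cdot\|_{S_q}$ denotes the Schatten-$q$ norm.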
16. A NEW HIGH PERFORMANCE SPARSE STATIC SOLVER IN FINITE ELEMENT ANALYSIS WITH LOOP-UNROLLING (cited 1 time)
Authors: Chen Pu, Sun Shuli. Acta Mechanica Solida Sinica (SCIE, EI), 2005, No. 3, pp. 248-255 (8 pages)
In previous papers, a high-performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite high efficiency for a large percentage of finite element analysis benchmark tests, the MFLOPS (million floating-point operations per second) of LDL^T factorization in the benchmark tests vary on a Dell Pentium IV 850 MHz machine from 100 to 456, depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling is proposed that employs the concept of master equations and searches for an appropriate depth of unrolling. The new solver provides higher MFLOPS for LDL^T factorization of the benchmark tests, and therefore speeds up the solution process.
Keywords: high performance computing, sparse matrix, finite element analysis
17. Slope reliability analysis based on Monte Carlo simulation and sparse grid method (cited 2 times)
Authors: WU Guoxue, PENG Yijin, LIU Xuesong, HU Tao, WU Hao. Global Geology, 2019, No. 3, pp. 152-158 (7 pages)
In order to solve the reliability problem of slope engineering under complex uncertainties, the Monte Carlo simulation method is adopted. Based on the characteristics of sparse grids, an interpolation algorithm that can be applied to high-dimensional problems is introduced. A surrogate model of the high-dimensional implicit function is established, which makes the Monte Carlo method more adaptable. Finally, a reliability analysis method is proposed to evaluate the reliability of slope engineering, and it is applied to the Sau Mau Ping slope project in Hong Kong. The reliability analysis method has great theoretical and practical significance for engineering quality evaluation and natural disaster assessment.
Keywords: slope reliability analysis, high dimension, sparse grid, Monte Carlo simulation
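The Monte Carlo part of the workflow described above amounts to estimating a failure probability by sampling the uncertain inputs and evaluating a (possibly surrogate) limit state function; a minimal sketch, with a toy limit state standing in for the sparse-grid surrogate of the slope model, is:

```python
import numpy as np

def mc_failure_probability(limit_state, sample_inputs, n=100_000, seed=0):
    """limit_state(x) <= 0 denotes failure; sample_inputs(rng, n) draws an (n, d) array of inputs."""
    rng = np.random.default_rng(seed)
    X = sample_inputs(rng, n)
    g = np.apply_along_axis(limit_state, 1, X)      # evaluate the (surrogate) limit state per sample
    pf = np.mean(g <= 0.0)                          # Monte Carlo estimate of the failure probability
    se = np.sqrt(pf * (1.0 - pf) / n)               # standard error of the estimate
    return pf, se

# Toy example: two standard-normal inputs and a linear limit state (not the Sau Mau Ping model)
pf, se = mc_failure_probability(lambda x: 2.5 - x[0] - 0.6 * x[1],
                                lambda rng, n: rng.normal(size=(n, 2)))
print(f"P_f ≈ {pf:.4f} ± {se:.4f}")
```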
18. Biorthogonal Wavelet Based Algebraic Multigrid Preconditioners for Large Sparse Linear Systems (cited 1 time)
Authors: A. Padmanabha Reddy, Nagendrappa M. Bujurke. Applied Mathematics, 2011, No. 11, pp. 1378-1381 (4 pages)
In this article, algebraic multigrid preconditioners are designed with biorthogonal wavelets as intergrid operators for Krylov subspace iterative methods. The construction of the hierarchy of matrices in the algebraic multigrid context is based on the lowpass filter version of the wavelet transform. The robustness and efficiency of this new approach is tested by applying it to large sparse, unsymmetric and ill-conditioned matrices from the Tim Davis collection of sparse matrices. The proposed preconditioners have potential in reducing CPU time, operator complexity and storage space of the algebraic multigrid V-cycle, and meet the desired solution accuracy compared with orthogonal wavelets.
Keywords: algebraic multigrid, preconditioner, wavelet transform, sparse matrix, Krylov subspace iterative methods
19. A Fast LDL-factorization Approach for Large Sparse Positive Definite System and Its Application to One-to-one Marketing Optimization Computation
Authors: Min Wu, Bei He, Jin-Hua She. International Journal of Automation and Computing (EI), 2007, No. 1, pp. 88-94 (7 pages)
LDL-factorization is an efficient way of solving Ax = b for a large symmetric positive definite sparse matrix A. This paper presents a new method that further improves the efficiency of LDL-factorization. It is based on the theory of elimination trees for the factorization factor. It breaks the computations involved in LDL-factorization down into two stages: 1) the pattern of nonzero entries of the factor is predicted, and 2) the numerical values of the nonzero entries of the factor are computed. The factor is stored in the form of an elimination tree so as to reduce memory usage and avoid unnecessary numerical operations. Calculation results for some typical numerical examples demonstrate that this method provides significantly higher calculation efficiency for the one-to-one marketing optimization algorithm.
Keywords: sparse matrix, factorization, elimination tree, structure prediction, one-to-one marketing optimization
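The symbolic stage mentioned above relies on the elimination tree of the matrix pattern; a compact sketch of the classical elimination-tree computation (following Liu's path-compression algorithm, not necessarily the paper's specific implementation) is:

```python
import numpy as np
from scipy.sparse import csc_matrix

def elimination_tree(A):
    """Elimination tree of a sparse symmetric matrix pattern; parent[j] == -1 marks a root."""
    A = csc_matrix(A)
    n = A.shape[0]
    parent = -np.ones(n, dtype=int)
    ancestor = -np.ones(n, dtype=int)
    for j in range(n):                                   # process columns in order
        for i in A.indices[A.indptr[j]:A.indptr[j + 1]]:
            # walk from each above-diagonal entry up to its current root, with path compression
            while i != -1 and i < j:
                nxt = ancestor[i]
                ancestor[i] = j
                if nxt == -1:
                    parent[i] = j                        # i's subtree becomes a child of column j
                i = nxt
    return parent

# Small example: arrowhead pattern -> every column's parent is the last column
pattern = np.array([[1, 0, 0, 1],
                    [0, 1, 0, 1],
                    [0, 0, 1, 1],
                    [1, 1, 1, 1]])
print(elimination_tree(pattern))   # expected: [3 3 3 -1]
```

The tree computed in this symbolic stage determines which columns can contribute fill-in to which, so the numeric LDL^T stage can skip operations that are known in advance to produce zeros.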
20. Efficient Distributed Estimation of High-dimensional Sparse Precision Matrix for Transelliptical Graphical Models
Authors: Guan Peng WANG, Heng Jian CUI. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2021, No. 5, pp. 689-706 (18 pages)
In this paper, distributed estimation of the high-dimensional sparse precision matrix is proposed for transelliptical graphical models, based on the debiased D-trace loss penalized lasso and the hard threshold method, when samples are distributed across different machines. At a certain level of sparseness, this method not only achieves the correct selection of the non-zero elements of the sparse precision matrix, but its error rate is also comparable to that of the estimator in a non-distributed setting. The numerical results further prove that the proposed distributed method is more effective than the usual averaging method.
Keywords: distributed estimator, sparse precision matrix, high-dimensional, hard threshold, efficient communication