Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Anal...Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data’s underlying structure and their combination allows for a more nuanced and accurate separation of the main data components from outliers and noise. Then RPCA-SL is solved by employing a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulation and real data demonstrate significant advancements.展开更多
Latent factor(LF) models are highly effective in extracting useful knowledge from High-Dimensional and Sparse(HiDS) matrices which are commonly seen in various industrial applications. An LF model usually adopts itera...Latent factor(LF) models are highly effective in extracting useful knowledge from High-Dimensional and Sparse(HiDS) matrices which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers,which may consume many iterations to achieve a local optima,resulting in considerable time cost. Hence, determining how to accelerate the training process for LF models has become a significant issue. To address this, this work proposes a randomized latent factor(RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating computational burden. It also extends a standard learning process for randomized neural networks in context of LF analysis to make the resulting model represent an HiDS matrix correctly.Experimental results on three HiDS matrices from industrial applications demonstrate that compared with state-of-the-art LF models, RLF is able to achieve significantly higher computational efficiency and comparable prediction accuracy for missing data.I provides an important alternative approach to LF analysis of HiDS matrices, which is especially desired for industrial applications demanding highly efficient models.展开更多
High-dimensional and sparse(HiDS)matrices commonly arise in various industrial applications,e.g.,recommender systems(RSs),social networks,and wireless sensor networks.Since they contain rich information,how to accurat...High-dimensional and sparse(HiDS)matrices commonly arise in various industrial applications,e.g.,recommender systems(RSs),social networks,and wireless sensor networks.Since they contain rich information,how to accurately represent them is of great significance.A latent factor(LF)model is one of the most popular and successful ways to address this issue.Current LF models mostly adopt L2-norm-oriented Loss to represent an HiDS matrix,i.e.,they sum the errors between observed data and predicted ones with L2-norm.Yet L2-norm is sensitive to outlier data.Unfortunately,outlier data usually exist in such matrices.For example,an HiDS matrix from RSs commonly contains many outlier ratings due to some heedless/malicious users.To address this issue,this work proposes a smooth L1-norm-oriented latent factor(SL-LF)model.Its main idea is to adopt smooth L1-norm rather than L2-norm to form its Loss,making it have both strong robustness and high accuracy in predicting the missing data of an HiDS matrix.Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model not only is robust to the outlier data but also has significantly higher prediction accuracy than state-of-the-art models when they are used to predict the missing data of HiDS matrices.展开更多
In the previous papers, a high performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite a high efficiency for a big percentage of ...In the previous papers, a high performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite a high efficiency for a big percentage of finite element analysis benchmark tests, the MFLOPS (million floating operations per second) of LDL^T factorization of benchmark tests vary on a Dell Pentium IV 850 MHz machine from 100 to 456 depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling that employs the concept of master-equations and searches for an appropriate depths of unrolling is proposed. The new solver provides higher MFLOPS for LDL^T factorization of benchmark tests, and therefore speeds up the solution process.展开更多
LDL-factorization is an efficient way of solving Ax = b for a large symmetric positive definite sparse matrix A. This paper presents a new method that further improves the efficiency of LDL-factorization. It is based ...LDL-factorization is an efficient way of solving Ax = b for a large symmetric positive definite sparse matrix A. This paper presents a new method that further improves the efficiency of LDL-factorization. It is based on the theory of elimination trees for the factorization factor. It breaks the computations involved in LDL-factorization down into two stages: 1) the pattern of nonzero entries of the factor is predicted, and 2) the numerical values of the nonzero entries of the factor are computed. The factor is stored using the form of an elimination tree so as to reduce memory usage and avoid unnecessary numerical operations. The calculation results for some typical numerical examples demonstrate that this method provides a significantly higher calculation efficiency for the one-to-one marketing optimization algorithm.展开更多
A fast precise integration method is developed for the time integral of the hyperbolic heat conduction problem. The wave nature of heat transfer is used to analyze the structure of the matrix exponential, leading to t...A fast precise integration method is developed for the time integral of the hyperbolic heat conduction problem. The wave nature of heat transfer is used to analyze the structure of the matrix exponential, leading to the fact that the matrix exponential is sparse. The presented method employs the sparsity of the matrix exponential to improve the original precise integration method. The merits are that the proposed method is suitable for large hyperbolic heat equations and inherits the accuracy of the original version and the good computational efficiency, which are verified by two numerical examples.展开更多
Text classification of low resource language is always a trivial and challenging problem.This paper discusses the process of Urdu news classification and Urdu documents similarity.Urdu is one of the most famous spoken...Text classification of low resource language is always a trivial and challenging problem.This paper discusses the process of Urdu news classification and Urdu documents similarity.Urdu is one of the most famous spoken languages in Asia.The implementation of computational methodologies for text classification has increased over time.However,Urdu language has not much experimented with research,it does not have readily available datasets,which turn out to be the primary reason behind limited research and applying the latest methodologies to the Urdu.To overcome these obstacles,a mediumsized dataset having six categories is collected from authentic Pakistani news sources.Urdu is a rich but complex language.Text processing can be challenging for Urdu due to its complex features as compared to other languages.Term frequency-inverse document frequency(TFIDF)based term weighting scheme for extracting features,chi-2 for selecting essential features,and Linear discriminant analysis(LDA)for dimensionality reduction have been used.TFIDF matrix and cosine similarity measure have been used to identify similar documents in a collection and find the semantic meaning of words in a document FastText model has been applied.The training-test split evaluation methodology is used for this experimentation,which includes 70%for training data and 30%for testing data.State-of-the-art machine learning and deep dense neural network approaches for Urdu news classification have been used.Finally,we trained Multinomial Naïve Bayes,XGBoost,Bagging,and Deep dense neural network.Bagging and deep dense neural network outperformed the other algorithms.The experimental results show that deep dense achieves 92.0%mean f1 score,and Bagging 95.0%f1 score.展开更多
Daubechies interval cally weakly singular Fredholm kind. Utilizing the orthogonality equation is reduced into a linear wavelet is used to solve nurneriintegral equations of the second of the wavelet basis, the integra...Daubechies interval cally weakly singular Fredholm kind. Utilizing the orthogonality equation is reduced into a linear wavelet is used to solve nurneriintegral equations of the second of the wavelet basis, the integral system of equations. The vanishing moments of the wavelet make the wavelet coefficient matrices sparse, while the continuity of the derivative functions of basis overcomes naturally the singular problem of the integral solution. The uniform convergence of the approximate solution by the wavelet method is proved and the error bound is given. Finally, numerical example is presented to show the application of the wavelet method.展开更多
It was proposed that a robust and efficient parallelizable preconditioner for solving general sparse linear systems of equations, in which the use of sparse approximate inverse (AINV) techniques in a multi-level block...It was proposed that a robust and efficient parallelizable preconditioner for solving general sparse linear systems of equations, in which the use of sparse approximate inverse (AINV) techniques in a multi-level block ILU (BILUM) preconditioner were investigated. The resulting preconditioner retains robustness of BILUM preconditioner and has two advantages over the standard BILUM preconditioner: the ability to control sparsity and increased parallelism. Numerical experiments are used to show the effectiveness and efficiency of the new preconditioner.展开更多
Hyperspectral imagery generally contains a very large amount of data due to hundreds of spectral bands.Band selection is often applied firstly to reduce computational cost and facilitate subsequent tasks such as land-...Hyperspectral imagery generally contains a very large amount of data due to hundreds of spectral bands.Band selection is often applied firstly to reduce computational cost and facilitate subsequent tasks such as land-cover classification and higher level image analysis.In this paper,we propose a new band selection algorithm using sparse nonnegative matrix factorization (sparse NMF).Though acting as a clustering method for band selection,sparse NMF need not consider the distance metric between different spectral bands,which is often the key step for most common clustering-based band selection methods.By imposing sparsity on the coefficient matrix,the bands' clustering assignments can be easily indicated through the largest entry in each column of the matrix.Experimental results showed that sparse NMF provides considerable insight into the clustering-based band selection problem and the selected bands are good for land-cover classification.展开更多
Working memory plays an important role in human cognition. This study investigated how working memory was encoded by the power of multichannel local field potentials (LFPs) based on sparse non negative matrix factor...Working memory plays an important role in human cognition. This study investigated how working memory was encoded by the power of multichannel local field potentials (LFPs) based on sparse non negative matrix factorization (SNMF). SNMF was used to extract features from LFPs recorded from the prefrontal cortex of four SpragueDawley rats during a memory task in a Y maze, with 10 trials for each rat. Then the powerincreased LFP components were selected as working memoryrelated features and the other components were removed. After that, the inverse operation of SNMF was used to study the encoding of working memory in the time frequency domain. We demonstrated that theta and gamma power increased significantly during the working memory task. The results suggested that postsynaptic activity was simulated well by the sparse activity model. The theta and gamma bands were meaningful for encoding working memory.展开更多
It is understood that the sparse signal recovery with a standard compressive sensing(CS) strategy requires the measurement matrix known as a priori. The measurement matrix is, however, often perturbed in a practical...It is understood that the sparse signal recovery with a standard compressive sensing(CS) strategy requires the measurement matrix known as a priori. The measurement matrix is, however, often perturbed in a practical application.In order to handle such a case, an optimization problem by exploiting the sparsity characteristics of both the perturbations and signals is formulated. An algorithm named as the sparse perturbation signal recovery algorithm(SPSRA) is then proposed to solve the formulated optimization problem. The analytical results show that our SPSRA can simultaneously recover the signal and perturbation vectors by an alternative iteration way, while the convergence of the SPSRA is also analytically given and guaranteed. Moreover, the support patterns of the sparse signal and structured perturbation shown are the same and can be exploited to improve the estimation accuracy and reduce the computation complexity of the algorithm. The numerical simulation results verify the effectiveness of analytical ones.展开更多
In this paper,distributed estimation of high-dimensional sparse precision matrix is proposed based on the debiased D-trace loss penalized lasso and the hard threshold method when samples are distributed into different...In this paper,distributed estimation of high-dimensional sparse precision matrix is proposed based on the debiased D-trace loss penalized lasso and the hard threshold method when samples are distributed into different machines for transelliptical graphical models.At a certain level of sparseness,this method not only achieves the correct selection of non-zero elements of sparse precision matrix,but the error rate can be comparable to the estimator in a non-distributed setting.The numerical results further prove that the proposed distributed method is more effective than the usual average method.展开更多
In solving application problems, many largesscale nonlinear systems of equations result in sparse Jacobian matrices. Such nonlinear systems are called sparse nonlinear systems. The irregularity of the locations of non...In solving application problems, many largesscale nonlinear systems of equations result in sparse Jacobian matrices. Such nonlinear systems are called sparse nonlinear systems. The irregularity of the locations of nonzero elements of a general sparse matrix makes it very difficult to generally map sparse matrix computations to multiprocessors for parallel processing in a well balanced manner. To overcome this difficulty, we define a new storage scheme for general sparse matrices in this paper. With the new storage scheme, we develop parallel algorithms to solve large-scale general sparse systems of equations by interval Newton/Generalized bisection methods which reliably find all numerical solutions within a given domain.In Section 1, we provide an introduction to the addressed problem and the interval Newton's methods. In Section 2, some currently used storage schemes for sparse sys-terns are reviewed. In Section 3, new index schemes to store general sparse matrices are reported. In Section 4, we present a parallel algorithm to evaluate a general sparse Jarobian matrix. In Section 5, we present a parallel algorithm to solve the correspond-ing interval linear 8ystem by the all-row preconditioned scheme. Conclusions and future work are discussed in Section 6.展开更多
In this paper, we establish a class of sparse update algorithm based on matrix triangular factorizations for solving a system of sparse equations. The local Q-superlinear convergence of the algorithm is proved without...In this paper, we establish a class of sparse update algorithm based on matrix triangular factorizations for solving a system of sparse equations. The local Q-superlinear convergence of the algorithm is proved without introducing an m-step refactorization. We compare the numerical results of the new algorithm with those of the known algorithms, The comparison implies that the new algorithm is satisfactory.展开更多
Compressed sensing(CS) provides a new approach to acquire data as a sampling technique and makes it sure that a sparse signal can be reconstructed from few measurements. The construction of compressed matrixes is a ...Compressed sensing(CS) provides a new approach to acquire data as a sampling technique and makes it sure that a sparse signal can be reconstructed from few measurements. The construction of compressed matrixes is a central problem in compressed sensing. This paper provides a construction of deterministic CS matrixes, which are also disjunct and inclusive matrixes, from singular pseudo-symplectic space over finite fields of characteristic 2. Our construction is superior to De Vore's construction under some conditions and can be used to reconstruct sparse signals through an efficient algorithm.展开更多
Warehouse scheduling efficiency has to do with the length-height ratio of location (LHRL) to some extent, which hasn't been well investigated until now. In this paper a mathematic model is built by analyzing the re...Warehouse scheduling efficiency has to do with the length-height ratio of location (LHRL) to some extent, which hasn't been well investigated until now. In this paper a mathematic model is built by analyzing the relation between the travel time of the stacker and LHRL. Mean- while, warehouse scheduling strategy is studied combining with the project on the automatic production line of an enterprise, and a warehouse scheduling strategy is pro- posed based on index of quality (IoQ) parameters. Besides, the process of getting the value of IoQ is also simplified with the idea of sparse matrix. Finally, the IoQ scheduling strategy is compared with random strategy and First Come First Out strategy in different LHRLs. The simulation results show that the IoQ scheduling strategy not only improves the quality of the product effectively, but also improves the efficiency of the scheduling substantially.展开更多
Two optimal orthogonalization processes are devised toorthogonalize,possibly approximately,the columns of a very large and possiblysparse matrix A∈C^(n×k).Algorithmically the aim is,at each step,to optimallydecr...Two optimal orthogonalization processes are devised toorthogonalize,possibly approximately,the columns of a very large and possiblysparse matrix A∈C^(n×k).Algorithmically the aim is,at each step,to optimallydecrease nonorthogonality of all the columns of A.One process relies on using translated small rank corrections.Another is a polynomial orthogonalization process forperforming the Löwdin orthogonalization.The steps rely on using iterative methods combined,preferably,with preconditioning which can have a dramatic effect on how fast thenonorthogonality decreases.The speed of orthogonalization depends on howbunched the singular values of A are,modulo the number of steps taken.These methods put the steps of the Gram-Schmidt orthogonalizationprocess into perspective regardingtheir(lack of)optimality.The constructions are entirely operatortheoretic and can be extended to infinite dimensional Hilbert spaces.展开更多
A fully 3D OSEM reconstruction method for positron emission tomography (PET) based on symmetries and sparse matrix technique is described. Great savings in both storage space and computation time were achieved by ex...A fully 3D OSEM reconstruction method for positron emission tomography (PET) based on symmetries and sparse matrix technique is described. Great savings in both storage space and computation time were achieved by exploiting the symmetries of scanner and sparseness of the system matrix. More reduction of storage requirement was obtained by introducing the approximation of system matrix. Iteration-filter was performed to restrict image noise in reconstruction. Performances of simulation data and phantom data got from Micro-PET (Type: Epuls-166) demonstrated that similar image quality was achieved using the approximation of the system matrix.展开更多
An r-adaptive boundary element method(BEM) based on unbalanced Haar wavelets(UBHWs) is developed for solving 2D Laplace equations in which the Galerkin method is used to discretize boundary integral equations.To a...An r-adaptive boundary element method(BEM) based on unbalanced Haar wavelets(UBHWs) is developed for solving 2D Laplace equations in which the Galerkin method is used to discretize boundary integral equations.To accelerate the convergence of the adaptive process,the grading function and optimization iteration methods are successively employed.Numerical results of two representative examples clearly show that,first,the combined iteration method can accelerate the convergence;moreover,by using UBHWs,the memory usage for storing the system matrix of the r-adaptive BEM can be reduced by a factor of about 100 for problems with more than 15 thousand unknowns,while the error and convergence property of the original BEM can be retained.展开更多
文摘Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data’s underlying structure and their combination allows for a more nuanced and accurate separation of the main data components from outliers and noise. Then RPCA-SL is solved by employing a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulation and real data demonstrate significant advancements.
基金supported in part by the National Natural Science Foundation of China (6177249391646114)+1 种基金Chongqing research program of technology innovation and application (cstc2017rgzn-zdyfX0020)in part by the Pioneer Hundred Talents Program of Chinese Academy of Sciences
文摘Latent factor(LF) models are highly effective in extracting useful knowledge from High-Dimensional and Sparse(HiDS) matrices which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers,which may consume many iterations to achieve a local optima,resulting in considerable time cost. Hence, determining how to accelerate the training process for LF models has become a significant issue. To address this, this work proposes a randomized latent factor(RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating computational burden. It also extends a standard learning process for randomized neural networks in context of LF analysis to make the resulting model represent an HiDS matrix correctly.Experimental results on three HiDS matrices from industrial applications demonstrate that compared with state-of-the-art LF models, RLF is able to achieve significantly higher computational efficiency and comparable prediction accuracy for missing data.I provides an important alternative approach to LF analysis of HiDS matrices, which is especially desired for industrial applications demanding highly efficient models.
基金supported in part by the National Natural Science Foundation of China(61702475,61772493,61902370,62002337)in part by the Natural Science Foundation of Chongqing,China(cstc2019jcyj-msxmX0578,cstc2019jcyjjqX0013)+1 种基金in part by the Chinese Academy of Sciences“Light of West China”Program,in part by the Pioneer Hundred Talents Program of Chinese Academy of Sciencesby Technology Innovation and Application Development Project of Chongqing,China(cstc2019jscx-fxydX0027)。
文摘High-dimensional and sparse(HiDS)matrices commonly arise in various industrial applications,e.g.,recommender systems(RSs),social networks,and wireless sensor networks.Since they contain rich information,how to accurately represent them is of great significance.A latent factor(LF)model is one of the most popular and successful ways to address this issue.Current LF models mostly adopt L2-norm-oriented Loss to represent an HiDS matrix,i.e.,they sum the errors between observed data and predicted ones with L2-norm.Yet L2-norm is sensitive to outlier data.Unfortunately,outlier data usually exist in such matrices.For example,an HiDS matrix from RSs commonly contains many outlier ratings due to some heedless/malicious users.To address this issue,this work proposes a smooth L1-norm-oriented latent factor(SL-LF)model.Its main idea is to adopt smooth L1-norm rather than L2-norm to form its Loss,making it have both strong robustness and high accuracy in predicting the missing data of an HiDS matrix.Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model not only is robust to the outlier data but also has significantly higher prediction accuracy than state-of-the-art models when they are used to predict the missing data of HiDS matrices.
基金Project supported by the Research Fund for the Doctoral Program of Higher Education (No.20030001112).
文摘In the previous papers, a high performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite a high efficiency for a big percentage of finite element analysis benchmark tests, the MFLOPS (million floating operations per second) of LDL^T factorization of benchmark tests vary on a Dell Pentium IV 850 MHz machine from 100 to 456 depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling that employs the concept of master-equations and searches for an appropriate depths of unrolling is proposed. The new solver provides higher MFLOPS for LDL^T factorization of benchmark tests, and therefore speeds up the solution process.
基金This work was supported in part by the National Natural Science Foundation of PRC (No.60425310)the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of MOE,PRC.
文摘LDL-factorization is an efficient way of solving Ax = b for a large symmetric positive definite sparse matrix A. This paper presents a new method that further improves the efficiency of LDL-factorization. It is based on the theory of elimination trees for the factorization factor. It breaks the computations involved in LDL-factorization down into two stages: 1) the pattern of nonzero entries of the factor is predicted, and 2) the numerical values of the nonzero entries of the factor are computed. The factor is stored using the form of an elimination tree so as to reduce memory usage and avoid unnecessary numerical operations. The calculation results for some typical numerical examples demonstrate that this method provides a significantly higher calculation efficiency for the one-to-one marketing optimization algorithm.
基金supported by the National Natural Science Foundation of China (Nos. 10902020 and 10721062)
文摘A fast precise integration method is developed for the time integral of the hyperbolic heat conduction problem. The wave nature of heat transfer is used to analyze the structure of the matrix exponential, leading to the fact that the matrix exponential is sparse. The presented method employs the sparsity of the matrix exponential to improve the original precise integration method. The merits are that the proposed method is suitable for large hyperbolic heat equations and inherits the accuracy of the original version and the good computational efficiency, which are verified by two numerical examples.
文摘Text classification of low resource language is always a trivial and challenging problem.This paper discusses the process of Urdu news classification and Urdu documents similarity.Urdu is one of the most famous spoken languages in Asia.The implementation of computational methodologies for text classification has increased over time.However,Urdu language has not much experimented with research,it does not have readily available datasets,which turn out to be the primary reason behind limited research and applying the latest methodologies to the Urdu.To overcome these obstacles,a mediumsized dataset having six categories is collected from authentic Pakistani news sources.Urdu is a rich but complex language.Text processing can be challenging for Urdu due to its complex features as compared to other languages.Term frequency-inverse document frequency(TFIDF)based term weighting scheme for extracting features,chi-2 for selecting essential features,and Linear discriminant analysis(LDA)for dimensionality reduction have been used.TFIDF matrix and cosine similarity measure have been used to identify similar documents in a collection and find the semantic meaning of words in a document FastText model has been applied.The training-test split evaluation methodology is used for this experimentation,which includes 70%for training data and 30%for testing data.State-of-the-art machine learning and deep dense neural network approaches for Urdu news classification have been used.Finally,we trained Multinomial Naïve Bayes,XGBoost,Bagging,and Deep dense neural network.Bagging and deep dense neural network outperformed the other algorithms.The experimental results show that deep dense achieves 92.0%mean f1 score,and Bagging 95.0%f1 score.
基金Supported by the National Natural Science Foundation of China (60572048)the Natural Science Foundation of Guangdong Province(054006621)
文摘Daubechies interval cally weakly singular Fredholm kind. Utilizing the orthogonality equation is reduced into a linear wavelet is used to solve nurneriintegral equations of the second of the wavelet basis, the integral system of equations. The vanishing moments of the wavelet make the wavelet coefficient matrices sparse, while the continuity of the derivative functions of basis overcomes naturally the singular problem of the integral solution. The uniform convergence of the approximate solution by the wavelet method is proved and the error bound is given. Finally, numerical example is presented to show the application of the wavelet method.
文摘It was proposed that a robust and efficient parallelizable preconditioner for solving general sparse linear systems of equations, in which the use of sparse approximate inverse (AINV) techniques in a multi-level block ILU (BILUM) preconditioner were investigated. The resulting preconditioner retains robustness of BILUM preconditioner and has two advantages over the standard BILUM preconditioner: the ability to control sparsity and increased parallelism. Numerical experiments are used to show the effectiveness and efficiency of the new preconditioner.
基金Project (No.60872071) supported by the National Natural Science Foundation of China
文摘Hyperspectral imagery generally contains a very large amount of data due to hundreds of spectral bands.Band selection is often applied firstly to reduce computational cost and facilitate subsequent tasks such as land-cover classification and higher level image analysis.In this paper,we propose a new band selection algorithm using sparse nonnegative matrix factorization (sparse NMF).Though acting as a clustering method for band selection,sparse NMF need not consider the distance metric between different spectral bands,which is often the key step for most common clustering-based band selection methods.By imposing sparsity on the coefficient matrix,the bands' clustering assignments can be easily indicated through the largest entry in each column of the matrix.Experimental results showed that sparse NMF provides considerable insight into the clustering-based band selection problem and the selected bands are good for land-cover classification.
基金supported by the National Natural Science Foundation of China (61074131 and 91132722)the Doctoral Fund of the Ministry of Education of China (21101202110007)
文摘Working memory plays an important role in human cognition. This study investigated how working memory was encoded by the power of multichannel local field potentials (LFPs) based on sparse non negative matrix factorization (SNMF). SNMF was used to extract features from LFPs recorded from the prefrontal cortex of four SpragueDawley rats during a memory task in a Y maze, with 10 trials for each rat. Then the powerincreased LFP components were selected as working memoryrelated features and the other components were removed. After that, the inverse operation of SNMF was used to study the encoding of working memory in the time frequency domain. We demonstrated that theta and gamma power increased significantly during the working memory task. The results suggested that postsynaptic activity was simulated well by the sparse activity model. The theta and gamma bands were meaningful for encoding working memory.
基金supported by the National Natural Science Foundation of China(61171127)
文摘It is understood that the sparse signal recovery with a standard compressive sensing(CS) strategy requires the measurement matrix known as a priori. The measurement matrix is, however, often perturbed in a practical application.In order to handle such a case, an optimization problem by exploiting the sparsity characteristics of both the perturbations and signals is formulated. An algorithm named as the sparse perturbation signal recovery algorithm(SPSRA) is then proposed to solve the formulated optimization problem. The analytical results show that our SPSRA can simultaneously recover the signal and perturbation vectors by an alternative iteration way, while the convergence of the SPSRA is also analytically given and guaranteed. Moreover, the support patterns of the sparse signal and structured perturbation shown are the same and can be exploited to improve the estimation accuracy and reduce the computation complexity of the algorithm. The numerical simulation results verify the effectiveness of analytical ones.
基金partly supported by National Natural Science Foundation of China(Grant Nos.12031016,11971324,11471223)Foundations of Science and Technology Innovation Service Capacity Building,Interdisciplinary Construction of Bioinformatics and Statistics,and Academy for Multidisciplinary Studies,Capital Normal University,Beijing。
文摘In this paper,distributed estimation of high-dimensional sparse precision matrix is proposed based on the debiased D-trace loss penalized lasso and the hard threshold method when samples are distributed into different machines for transelliptical graphical models.At a certain level of sparseness,this method not only achieves the correct selection of non-zero elements of sparse precision matrix,but the error rate can be comparable to the estimator in a non-distributed setting.The numerical results further prove that the proposed distributed method is more effective than the usual average method.
文摘In solving application problems, many largesscale nonlinear systems of equations result in sparse Jacobian matrices. Such nonlinear systems are called sparse nonlinear systems. The irregularity of the locations of nonzero elements of a general sparse matrix makes it very difficult to generally map sparse matrix computations to multiprocessors for parallel processing in a well balanced manner. To overcome this difficulty, we define a new storage scheme for general sparse matrices in this paper. With the new storage scheme, we develop parallel algorithms to solve large-scale general sparse systems of equations by interval Newton/Generalized bisection methods which reliably find all numerical solutions within a given domain.In Section 1, we provide an introduction to the addressed problem and the interval Newton's methods. In Section 2, some currently used storage schemes for sparse sys-terns are reviewed. In Section 3, new index schemes to store general sparse matrices are reported. In Section 4, we present a parallel algorithm to evaluate a general sparse Jarobian matrix. In Section 5, we present a parallel algorithm to solve the correspond-ing interval linear 8ystem by the all-row preconditioned scheme. Conclusions and future work are discussed in Section 6.
文摘In this paper, we establish a class of sparse update algorithm based on matrix triangular factorizations for solving a system of sparse equations. The local Q-superlinear convergence of the algorithm is proved without introducing an m-step refactorization. We compare the numerical results of the new algorithm with those of the known algorithms, The comparison implies that the new algorithm is satisfactory.
基金supported by the National Natural Science Foundation of China (61179026)
文摘Compressed sensing(CS) provides a new approach to acquire data as a sampling technique and makes it sure that a sparse signal can be reconstructed from few measurements. The construction of compressed matrixes is a central problem in compressed sensing. This paper provides a construction of deterministic CS matrixes, which are also disjunct and inclusive matrixes, from singular pseudo-symplectic space over finite fields of characteristic 2. Our construction is superior to De Vore's construction under some conditions and can be used to reconstruct sparse signals through an efficient algorithm.
基金supported by the National Natural Science Foundation of China (Grant Nos. 61074032 and 61273040)the Project of Science and Technology Commission of Shanghai Municipality (Grant No. 10JC1405000)the Shanghai Rising-Star Program (Grant No. 12QA1401100)
文摘Warehouse scheduling efficiency has to do with the length-height ratio of location (LHRL) to some extent, which hasn't been well investigated until now. In this paper a mathematic model is built by analyzing the relation between the travel time of the stacker and LHRL. Mean- while, warehouse scheduling strategy is studied combining with the project on the automatic production line of an enterprise, and a warehouse scheduling strategy is pro- posed based on index of quality (IoQ) parameters. Besides, the process of getting the value of IoQ is also simplified with the idea of sparse matrix. Finally, the IoQ scheduling strategy is compared with random strategy and First Come First Out strategy in different LHRLs. The simulation results show that the IoQ scheduling strategy not only improves the quality of the product effectively, but also improves the efficiency of the scheduling substantially.
基金supported by the Academy of Finland(Grant No.288641)。
文摘Two optimal orthogonalization processes are devised toorthogonalize,possibly approximately,the columns of a very large and possiblysparse matrix A∈C^(n×k).Algorithmically the aim is,at each step,to optimallydecrease nonorthogonality of all the columns of A.One process relies on using translated small rank corrections.Another is a polynomial orthogonalization process forperforming the Löwdin orthogonalization.The steps rely on using iterative methods combined,preferably,with preconditioning which can have a dramatic effect on how fast thenonorthogonality decreases.The speed of orthogonalization depends on howbunched the singular values of A are,modulo the number of steps taken.These methods put the steps of the Gram-Schmidt orthogonalizationprocess into perspective regardingtheir(lack of)optimality.The constructions are entirely operatortheoretic and can be extended to infinite dimensional Hilbert spaces.
基金Supported by National High Technology Research and Development Program of China (2006AA020803)National Basic Research Program of China (2006CB705700)
文摘A fully 3D OSEM reconstruction method for positron emission tomography (PET) based on symmetries and sparse matrix technique is described. Great savings in both storage space and computation time were achieved by exploiting the symmetries of scanner and sparseness of the system matrix. More reduction of storage requirement was obtained by introducing the approximation of system matrix. Iteration-filter was performed to restrict image noise in reconstruction. Performances of simulation data and phantom data got from Micro-PET (Type: Epuls-166) demonstrated that similar image quality was achieved using the approximation of the system matrix.
基金Supported by the National Natural Science Foundation of China (10674109)the Doctorate Foundation of Northwestern Polytechnical University (CX200601)
文摘An r-adaptive boundary element method(BEM) based on unbalanced Haar wavelets(UBHWs) is developed for solving 2D Laplace equations in which the Galerkin method is used to discretize boundary integral equations.To accelerate the convergence of the adaptive process,the grading function and optimization iteration methods are successively employed.Numerical results of two representative examples clearly show that,first,the combined iteration method can accelerate the convergence;moreover,by using UBHWs,the memory usage for storing the system matrix of the r-adaptive BEM can be reduced by a factor of about 100 for problems with more than 15 thousand unknowns,while the error and convergence property of the original BEM can be retained.