Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data's underlying structure, and their combination allows a more nuanced and accurate separation of the main data components from outliers and noise. RPCA-SL is then solved with a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulated and real data demonstrate significant advancements.
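The abstract does not specify RPCA-SL's exact priors or step sizes, so the following is only a minimal sketch of the classical proximal scheme for the underlying RPCA model, min ||L||_* + lam*||S||_1 + (mu/2)*||D - L - S||_F^2; the function name and parameter defaults are illustrative, not the paper's implementation.

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: proximity operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Elementwise soft thresholding: proximity operator of tau*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_prox_grad(D, lam=None, mu=1.0, n_iter=200):
    """Alternating proximal steps for
    min_{L,S} ||L||_* + lam*||S||_1 + (mu/2)*||D - L - S||_F^2."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))  # standard RPCA weighting
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, 1.0 / mu)   # exact prox of the nuclear-norm term
        S = soft(D - L, lam / mu)  # exact prox of the l1 term
    return L, S

# Toy usage: a rank-5 matrix corrupted by sparse large-amplitude outliers.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
S0 = np.zeros((50, 50))
S0[rng.random((50, 50)) < 0.05] = 10.0
L_hat, S_hat = rpca_prox_grad(L0 + S0)
```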
To address the seismic face stability challenges encountered in urban and subsea tunnel construction, an efficient probabilistic analysis framework for shield tunnel faces under seismic conditions is proposed. Based on the upper-bound theory of limit analysis, an improved three-dimensional discrete deterministic mechanism, accounting for the heterogeneous nature of soil media, is formulated to evaluate seismic face stability. A metamodel for the failure probability assessment of seismic tunnel faces is constructed by integrating the sparse polynomial chaos expansion (SPCE) method with the modified pseudo-dynamic approach (MPD). The improved deterministic model is validated against published literature and numerical simulation results, and the SPCE-MPD metamodel is checked against the traditional Monte Carlo simulation (MCS) method. Based on the SPCE-MPD metamodels, the seismic effects on face failure probability and reliability index are presented, and a global sensitivity analysis (GSA) is performed to rank the influence of the seismic action parameters. Finally, the proposed approach is shown to be effective on an engineering case, the Chengdu outer ring tunnel. The results show that the higher uncertainty of the seismic response on face stability should be noted in areas with intense earthquakes, and that the variation of seismic wave velocity has the most profound influence on tunnel face stability.
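As a rough illustration of the probabilistic side of such a framework (not the SPCE-MPD construction itself), the sketch below estimates a failure probability and the corresponding reliability index by Monte Carlo simulation on a hypothetical limit-state metamodel g, where failure corresponds to g < 0.

```python
import numpy as np
from scipy.stats import norm

def g(x):
    # Hypothetical limit-state metamodel standing in for the trained
    # SPCE-MPD surrogate of the tunnel-face model; failure when g(x) < 0.
    return 3.0 - x[:, 0] - 0.5 * x[:, 1]

rng = np.random.default_rng(1)
x = rng.standard_normal((100_000, 2))   # standardized random inputs
pf = np.mean(g(x) < 0.0)                # Monte Carlo failure probability
beta = -norm.ppf(pf)                    # corresponding reliability index
print(f"pf = {pf:.4f}, beta = {beta:.3f}")
```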
Multi-view Subspace Clustering (MVSC) is an advanced clustering method designed to integrate diverse views to uncover a common subspace, enhancing the accuracy and robustness of clustering results. The low-rank prior plays a significant role in MVSC, capturing the global data structure across views for improved performance. However, MVSC faces challenges with outlier sensitivity due to its reliance on the Frobenius norm for error measurement. Addressing this, our paper proposes a Low-Rank Multi-view Subspace Clustering Based on Sparse Regularization (LMVSC-Sparse) approach. Sparse regularization helps select the most relevant features or views for clustering while ignoring irrelevant or noisy ones. This leads to a more efficient and effective representation of the data, improving clustering accuracy and robustness, especially in the presence of outliers or noisy data. By incorporating sparse regularization, LMVSC-Sparse can effectively handle the outlier sensitivity that is a common challenge for traditional MVSC methods relying solely on low-rank priors. The Alternating Direction Method of Multipliers (ADMM) is then employed to solve the proposed optimization problem. Our comprehensive experiments demonstrate the efficiency and effectiveness of LMVSC-Sparse, offering a robust alternative to traditional MVSC methods.
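The exact LMVSC-Sparse objective is not given in the abstract; the sketch below shows only the generic ADMM pattern it relies on, applied to the simplest sparse-regularized problem, the lasso min 0.5*||Ax - b||^2 + lam*||x||_1, with illustrative parameter defaults.

```python
import numpy as np

def soft(v, tau):
    """Elementwise soft thresholding, the proximity operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=100):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1, split as x = z."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    factor = np.linalg.inv(AtA + rho * np.eye(n))  # cached across iterations
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = factor @ (Atb + rho * (z - u))  # smooth least-squares subproblem
        z = soft(x + u, lam / rho)          # l1 proximal subproblem
        u = u + x - z                       # dual update enforcing x = z
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[:4] = [3.0, -2.0, 1.5, 1.0]
x_hat = admm_lasso(A, A @ x_true + 0.01 * rng.standard_normal(80))
```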
(Multichannel) Singular spectrum analysis is considered one of the most effective methods for suppressing incoherent seismic noise. It exploits the low-rank structure of the seismic signal and treats noise suppression as a low-rank reconstruction problem. In some cases, however, seismic geophones record erratic disturbances whose amplitudes are dramatically larger than those at other receivers. The presence of this kind of noise, called erratic noise, makes singular spectrum analysis (SSA) reconstruction unstable and has undesirable effects on the final results. We robustify the low-rank reconstruction of seismic data with a reweighted damped SSA (RD-SSA) method, which incorporates damped SSA, an improved version of SSA, into a reweighted framework. The damping operator weakens the artificial disturbance introduced by the low-rank projection of both erratic and random noise. The central idea of RD-SSA is to iteratively approximate the observed data using the quadratic norm in the first iteration and Tukey's bisquare norm in the remaining iterations. The RD-SSA method can suppress incoherent seismic noise while keeping the reconstruction process robust to erratic disturbances. The feasibility of RD-SSA is validated on both synthetic and field data examples.
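The reweighting and damping details of RD-SSA are not reproducible from the abstract alone; the following sketch shows only the plain SSA rank-reduction step it builds on, applied to a single-channel signal (window length and rank are illustrative).

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Basic SSA: embed the signal into a Hankel matrix, truncate its SVD,
    and recover a signal by anti-diagonal (Hankel) averaging."""
    n = len(x)
    k = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]    # low-rank projection
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):                 # average back along anti-diagonals
        out[j:j + window] += H_low[:, j]
        cnt[j:j + window] += 1.0
    return out / cnt

t = np.linspace(0, 1, 500)
noisy = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.default_rng(3).standard_normal(500)
clean = ssa_denoise(noisy, window=100, rank=2)  # a sinusoid occupies rank 2
```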
Traditional unsupervised seismic facies analysis techniques assume that seismic data obey a mixed Gaussian distribution. Field seismic data may not meet this condition, leading to misclassification when the technology is applied. This paper introduces a spectral clustering technique for unsupervised seismic facies analysis. The algorithm clusters the data using ideas from graph theory: seismic data are regarded as points in space, points are connected by edges to construct a graph, and the graph is partitioned so that the weights of edges between different subgraphs are as low as possible while the weights of edges within each subgraph are as high as possible. However, the spectral clustering algorithm has high computational complexity and large memory consumption. To solve this problem, this paper introduces the idea of sparse representation into spectral clustering. By selecting a small number of local sparse representation points, the spectral clustering matrix of all sample points is approximately represented, reducing the cost of the spectral clustering operation. Verification on a physical model and field data shows that the proposed approach obtains more accurate seismic facies classification results without requiring the data to satisfy any distributional hypothesis. The computational efficiency of the new method is better than that of the conventional spectral clustering method, meeting the application needs of field seismic data.
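For reference, here is a minimal sketch of the standard spectral clustering baseline that the paper accelerates (Gaussian affinity, normalized Laplacian, eigenvector embedding, k-means); the sparse-representation approximation itself is not reproduced, and the bandwidth sigma is illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering(X, n_clusters, sigma=1.0):
    """Standard normalized spectral clustering on n samples."""
    # Gaussian affinity between all sample pairs.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    # Embed with the eigenvectors of the smallest eigenvalues, then k-means.
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :n_clusters]
    emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
    _, labels = kmeans2(emb, n_clusters, minit='++', seed=4)
    return labels

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = spectral_clustering(X, n_clusters=2)
```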
Many methods based on deep learning have achieved impressive results in image sentiment analysis. However, these existing methods usually pursue high accuracy while ignoring training efficiency; for large-scale sentiment analysis tasks, high accuracy often requires long experimental time. In view of this weakness, a method is proposed that greatly improves experimental efficiency with only small fluctuations in model accuracy. Singular value decomposition (SVD) is used to find sparse features of the image: sparse vectors with strong discriminative power that effectively reduce redundant information. The authors also propose the Fast Dictionary Learning (FDL) algorithm, which combines neural networks with sparse representation. This method is based on K-Singular Value Decomposition and, through iteration, effectively reduces computation time and greatly improves training efficiency with only a small fluctuation in accuracy. The effectiveness of the proposed method is evaluated on the FER2013 dataset. By adding singular value decomposition, test-set accuracy increased by 0.53% and the total experiment time was shortened by 8.2%; Fast Dictionary Learning shortened the total experiment time by 36.3%.
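As a small illustration of the SVD step described above (not the FDL algorithm), the sketch below compresses an image to its leading singular components, discarding redundant information; the rank and the random stand-in image are arbitrary.

```python
import numpy as np

def svd_features(img, rank):
    """Keep only the top singular components of an image, discarding the
    redundant remainder (illustrating the SVD-based feature compression)."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :rank] * s[:rank] @ Vt[:rank, :]  # rank-r approximation

img = np.random.default_rng(5).random((48, 48))   # stand-in for a face image
compressed = svd_features(img, rank=8)
rel_err = np.linalg.norm(img - compressed) / np.linalg.norm(img)
```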
To evaluate the reliability of slope engineering under complex uncertainties, the Monte Carlo simulation method is adopted. Based on the characteristics of sparse grids, an interpolation algorithm applicable to high-dimensional problems is introduced, and a surrogate model of the high-dimensional implicit function is established, which makes the Monte Carlo method more adaptable. Finally, a reliability analysis method is proposed to evaluate the reliability of slope engineering, and it is applied to the Sau Mau Ping slope project in Hong Kong. The reliability analysis method has great theoretical and practical significance for engineering quality evaluation and natural disaster assessment.
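A one-dimensional sketch of the idea, under stated simplifications: the implicit limit-state function is replaced by a cheap interpolated surrogate (Chebyshev nodes standing in for a sparse grid, which only pays off in high dimensions), and Monte Carlo sampling then runs on the surrogate; the function g is hypothetical, not the slope model.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def g(x):
    # Hypothetical implicit limit-state function (failure when g < 0);
    # in practice each evaluation would be an expensive stability computation.
    return 1.5 - 0.2 * x - 0.05 * x ** 2

# Interpolate g at a few Chebyshev nodes spanning +/- 5 standard deviations.
nodes = 5.0 * C.chebpts1(9)
coeffs = C.chebfit(nodes, g(nodes), deg=8)  # interpolates the 9 samples exactly

# Monte Carlo on the cheap surrogate instead of the implicit function.
x = np.random.default_rng(6).standard_normal(1_000_000)
pf = np.mean(C.chebval(x, coeffs) < 0.0)    # estimated failure probability
```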
Spaceborne synthetic aperture radar (SAR) sparse-flight 3-D imaging forms an equivalent cross-track aperture through multiple observations in the cross-track direction, achieving resolution in the third dimension. In this paper, combined with actual triple-star orbits, a sparse-flight spaceborne SAR 3-D imaging method based on the sparse spectrum of interferometry and principal component analysis (PCA) is presented. First, interferometric processing is used to reach an effective sparse representation of the radar images in the frequency domain. Second, the PCA, a method with a simple principle and fast computation, is introduced to extract the main features of the image spectrum according to its principal characteristics. Finally, the 3-D image is obtained by inverse transformation of the spectrum reconstructed by the PCA. Simulation results for a 4.84 km equivalent cross-track aperture and the corresponding 1.78 m cross-track resolution verify that the method effectively suppresses the high-frequency sidelobe noise introduced by sparse flight with a sparsity of 49% as well as the random noise introduced by the receiver. Meanwhile, owing to the influence of the orbit distribution of the actual triple-star orbits, simulation results for sparse flight with 7-bit Barker code orbits are given as a comparison and reference to illustrate the significance of the orbit distribution for the reconstruction results. With its short revisit period, this method has prospects for sparse-flight 3-D imaging at high latitudes.
In previous papers, a high-performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite a high efficiency for a large percentage of finite element analysis benchmark tests, the MFLOPS (million floating-point operations per second) of LDL^T factorization in the benchmark tests vary on a Dell Pentium IV 850 MHz machine from 100 to 456, depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling is proposed that employs the concept of master-equations and searches for an appropriate depth of unrolling. The new solver provides higher MFLOPS for LDL^T factorization of the benchmark tests and therefore speeds up the solution process.
By compressive sensing theory, signals can be sampled at a much lower rate than required by the traditional Nyquist sampling theorem and reconstructed with high probability, but only when the signals are sparse in the time domain or in some transform domain. Most real-world signals are not sparse, but can be expressed in sparse form through some sparse transformation. Commonly used sparse transformations lose some information because their transform bases are generally fixed. In this paper, we use principal component analysis for data reduction, replacing the high-dimensional original variables with new low-dimensional variables that are linearly correlated with them, so that the useful information in the original signals is retained as much as possible in the sparse signals after dimensionality reduction. The loss of data is thereby reduced and the efficiency of signal reconstruction is improved. Finally, a composite material plate is used for experimental verification. The experimental results show that the sparse representation of signals based on principal component analysis can reduce signal distortion and improve signal reconstruction efficiency.
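A minimal sketch of the PCA reduction described above: project the data onto the leading principal components and reconstruct, so that most of the useful information survives the dimensionality reduction; the synthetic near-low-rank data and the component count are illustrative.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the leading principal components and
    reconstruct, keeping the most informative part of the signal."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal directions are the right singular vectors of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components]            # projection basis
    scores = Xc @ W.T                # low-dimensional representation
    X_rec = scores @ W + mean        # reconstruction from few components
    return scores, X_rec

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 64))  # ~rank-3 signals
scores, X_rec = pca_reduce(X, n_components=3)
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)  # near zero here
```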
Surrogate models are usually used to perform global sensitivity analysis (GSA), avoiding the large ensemble of deterministic simulations required by the Monte Carlo method to provide a reliable estimate of the GSA indices. However, most surrogate models, such as polynomial chaos (PC) expansions, suffer from the curse of dimensionality due to the high-dimensional input space. Sparse surrogate models have thus been proposed to alleviate the curse of dimensionality. In this paper, three sparse-reconstruction techniques are used to construct sparse PC expansions that are easily applicable to computing variance-based sensitivity indices (Sobol indices): orthogonal matching pursuit (OMP), spectral projected gradient for L1 minimization (SPGL1), and Bayesian compressive sensing with Laplace priors. By computing Sobol indices for several benchmark response models, including the Sobol function, the Morris function, and the Sod shock tube problem, effective implementations of high-dimensional sparse surrogate construction are exhibited for GSA.
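Of the three techniques, OMP is the simplest to sketch; the version below solves a generic sparse recovery problem (the same routine would select active PC basis terms), greedily picking columns and refitting the coefficients by least squares at every step.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily grow a support set and
    re-solve the least-squares problem restricted to it each step."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(8)
A = rng.standard_normal((60, 200))       # overcomplete measurement/basis matrix
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [2.0, -1.0, 1.5]  # 3-sparse ground truth
x_hat = omp(A, A @ x_true, n_nonzero=3)
```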
The advent of the COVID-19 pandemic has adversely affected the entire world and created high demand for techniques that remotely manage crowd-related tasks. Video surveillance and crowd management using video analysis techniques have significantly impacted today's research, and numerous applications have been developed in this domain. This research proposes an anomaly detection technique applied to Umrah videos at the Kaaba during the COVID-19 pandemic through sparse crowd analysis. Managing the Kaaba rituals is crucial, since the crowd gathers from around the world and requires proper analysis during these days of the pandemic. The Umrah videos are analyzed, and a system is devised that can track and monitor the crowd flow at the Kaaba. The crowd in these videos is sparse due to the pandemic, and we have developed a technique to track the main crowd flow and detect any object (person) moving in a direction contrary to the majority flow. We detect abnormal movement by creating histograms of the vertical and horizontal flows and applying thresholds to identify the non-majority flow. Our algorithm aims to analyze the crowd through video surveillance and detect any abnormal activity in time, to maintain a smooth crowd flow at the Kaaba during the pandemic.
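A simplified sketch of the histogram-threshold idea, assuming OpenCV and two consecutive grayscale frames; the bin count and magnitude fraction are illustrative, and abnormal_flow_mask is a hypothetical helper rather than the paper's system.

```python
import numpy as np
import cv2

def abnormal_flow_mask(prev_gray, next_gray, bins=16, frac=0.05):
    """Histogram the horizontal and vertical optical flow and flag moving
    pixels that fall outside the majority-flow bin."""
    # prev_gray, next_gray: consecutive uint8 frames, e.g. from
    # cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mask = np.zeros(prev_gray.shape, dtype=bool)
    for comp in (flow[..., 0], flow[..., 1]):
        hist, edges = np.histogram(comp, bins=bins)
        k = int(np.argmax(hist))                       # majority-flow bin
        in_major = (comp >= edges[k]) & (comp <= edges[k + 1])
        # Pixels moving with enough magnitude outside the majority direction.
        mask |= (~in_major) & (np.abs(comp) > frac * np.abs(comp).max() + 1e-9)
    return mask
```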
A novel supervised dimensionality reduction algorithm, named discriminant embedding by sparse representation and nonparametric discriminant analysis (DESN), is proposed for face recognition. Within the DESN framework, the sparse local scatter and the multi-class nonparametric between-class scatter are exploited to describe within-class compactness and between-class separability, respectively. These descriptions, inspired by sparse representation theory and nonparametric techniques, are more discriminative in dealing with complex-distributed data. Furthermore, DESN seeks the optimal projection matrix by simultaneously maximizing the nonparametric between-class scatter and minimizing the sparse local scatter. The use of Fisher discriminant analysis further boosts the discriminating power of DESN. The proposed DESN was applied to data visualization and face recognition tasks and was tested extensively on the Wine, ORL, Yale, and Extended Yale B databases. Experimental results show that DESN is helpful for visualizing the structure of high-dimensional data sets, and the average face recognition rate of DESN is about 9.4% higher than that of the other algorithms.
This paper discusses three issues of statistical inference for a common risk ratio (RR) in sparse meta-analysis data. First, the conventional log-risk-ratio estimator encounters a number of problems when the number of events in the experimental or control group of a 2 × 2 table is zero. An adjusted log-risk-ratio estimator is proposed, with continuity correction points based on the minimum Bayes risk with respect to the uniform prior density over (0, 1) and the Euclidean loss function. Second, the optimal weights of the pooled estimate are sought that minimize its mean square error (MSE) subject to a constraint on the weights. Finally, the performance of this minimum-MSE weighted estimator, adjusted with various values of the correction points, is compared with other popular estimators, such as the Mantel-Haenszel (MH) estimator and the weighted least squares (WLS) estimator (equivalently known as the inverse-variance weighted estimator), in terms of point estimation and hypothesis testing via simulation studies. The estimation results illustrate that, regardless of the true value of the RR, the MH estimator performs best, with the smallest MSE, when the study size is rather large and the sample sizes within each study are small. The MSEs of the WLS estimator and the proposed weighted estimator under the various corrections are close together, and they are the best when the sample sizes are moderate to large while the study size is rather small.
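The MH estimator and a continuity-corrected log-risk ratio are standard enough to sketch; in the code below, cc plays the role of the correction point (the paper derives an optimal value, while 0.5 is the conventional choice of adding half an event to each cell).

```python
import numpy as np

def mh_risk_ratio(a, n1, c, n0):
    """Mantel-Haenszel pooled risk ratio over k 2x2 tables:
    a events out of n1 subjects (treatment), c out of n0 (control)."""
    N = n1 + n0
    return np.sum(a * n0 / N) / np.sum(c * n1 / N)

def adjusted_log_rr(a, n1, c, n0, cc=0.5):
    """Per-study log risk ratio with correction cc added to each cell,
    so studies with zero events still yield a finite estimate."""
    return np.log((a + cc) / (n1 + 2 * cc)) - np.log((c + cc) / (n0 + 2 * cc))

# Sparse meta-analysis data: the second study has zero control-group events.
a = np.array([3, 2, 5]);  n1 = np.array([40, 35, 50])
c = np.array([1, 0, 4]);  n0 = np.array([38, 36, 52])
print(mh_risk_ratio(a, n1, c, n0), adjusted_log_rr(a, n1, c, n0))
```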
The least squares projection twin support vector machine (LSPTSVM) has faster computing speed than the classical least squares support vector machine (LSSVM). However, LSPTSVM is sensitive to outliers and its solution lacks sparsity, which makes it difficult for LSPTSVM to process large-scale datasets with outliers. In this paper, we propose a robust LSPTSVM model (called R-LSPTSVM) by applying a truncated least squares loss function. The robustness of R-LSPTSVM is proved from a weighted perspective. Furthermore, we obtain a sparse solution of R-LSPTSVM by using the pivoting Cholesky factorization method in the primal space. Finally, the sparse R-LSPTSVM algorithm (SR-LSPTSVM) is proposed. Experimental results show that SR-LSPTSVM is insensitive to outliers and can handle large-scale datasets quickly.
The Low-Rank and Sparse Representation (LRSR) method has gained popularity in Hyperspectral Image (HSI) processing. However, existing LRSR models rarely exploit the spectral-spatial classification of HSI. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, since it combines sparsity and low-rankness to maintain global and local data structures simultaneously. The LRSR is optimized using a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than ADMM. Then, to incorporate spatial information, an ANR scheme is designed that combines Euclidean and cosine distance metrics to reduce the mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels in a minimum-reconstruction-error classification rule. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms related methods in terms of classification accuracy and generalization performance.
Polynomial Chaos Expansion (PCE) has gained significant popularity among engineers across various engineering disciplines for uncertainty analysis. However, traditional PCE suffers from two major drawbacks. First, the orthogonality of the polynomial basis functions holds only for independent input variables, limiting the model's ability to propagate uncertainty through dependent variables. Second, PCE encounters the "curse of dimensionality" due to the high computational cost of training the model with numerous polynomial coefficients. In practical manufacturing, compressor blades are subject to machining precision limitations, leading to deviations from their ideal geometric shapes. These deviations require a large number of geometric parameters to describe and exhibit significant correlations. To efficiently quantify the impact of high-dimensional dependent geometric deviations on the aerodynamic performance of compressor blades, this paper first introduces a novel approach called Data-driven Sparse PCE (DSPCE). The proposed method addresses the aforementioned challenges by employing a decorrelation algorithm to directly create multivariate basis functions, accommodating both independent and dependent random variables. Furthermore, the method uses an iterative Diffeomorphic Modulation under Observable Response Preserving Homotopy regression algorithm to solve for the unknown coefficients, achieving model sparsity while maintaining fitting accuracy. The study then investigates the simultaneous effects of seven dependent geometric deviations on the aerodynamics of a high-subsonic compressor cascade using the proposed DSPCE method and a sensitivity analysis of covariance. The joint distribution of the dependent geometric deviations is determined using quantile-quantile plots and normal copula functions based on finite measurement data. The results demonstrate that the correlations between geometric deviations significantly impact the variance of aerodynamic performance and the flow field; it is therefore crucial to consider these correlations for accurately assessing the aerodynamic uncertainty.
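DSPCE's decorrelation and D-MORPH regression steps are not reproducible from the abstract; the sketch below shows only the ordinary least-squares PCE baseline that DSPCE extends, for independent standard-normal inputs with probabilists' Hermite polynomials, including the output variance read directly off the coefficients (Var[He_n] = n!).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product
from math import factorial

def pce_fit(X, y, order):
    """Least-squares PCE for independent standard-normal inputs using
    probabilists' Hermite polynomials up to the given total degree."""
    dim = X.shape[1]
    alphas = [a for a in product(range(order + 1), repeat=dim)
              if sum(a) <= order]                 # multi-indices of the basis
    cols = []
    for a in alphas:
        col = np.ones(len(X))
        for k, deg in enumerate(a):
            e = np.zeros(deg + 1); e[deg] = 1.0
            col *= hermeval(X[:, k], e)           # He_deg evaluated on x_k
        cols.append(col)
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return alphas, coef

rng = np.random.default_rng(9)
X = rng.standard_normal((500, 2))
y = X[:, 0] ** 2 + 0.5 * X[:, 0] * X[:, 1]        # toy response surface
alphas, coef = pce_fit(X, y, order=3)
# Output variance from the coefficients (mean term excluded).
var = sum(c ** 2 * np.prod([factorial(d) for d in a])
          for a, c in zip(alphas, coef) if sum(a) > 0)
```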
The method of recovering a low-rank matrix in which an unknown fraction of the entries is arbitrarily corrupted is known as robust principal component analysis (RPCA). Under some conditions, the RPCA problem can be exactly solved via convex optimization by minimizing a combination of the nuclear norm and the l1 norm. In this paper, an algorithm based on the Douglas-Rachford splitting method is proposed for solving the RPCA problem. First, the convex optimization problem is solved by eliminating the constraint on the variables, and then the proximity operators of the objective function are computed alternately. The new algorithm can exactly recover the low-rank and sparse components simultaneously, and it is proved to be convergent. Numerical simulations demonstrate the practical utility of the proposed algorithm.
Although human speech separation has achieved fruitful results, existing methods remain unsatisfactory for separating singing from accompaniment. Based on low-rank and sparse optimization theory, in this paper we propose a new singing voice separation algorithm called Low-rank and Sparse Representation with pre-learned dictionaries and side Information (LSRi). The algorithm models the vocal and instrumental spectrograms as a sparse matrix and a low-rank matrix, respectively, while combining a pre-learned dictionary with a voice spectrogram reconstructed from the annotation. Evaluations on the iKala dataset show that the proposed method is effective and efficient for singing voice separation.
The Gabor and S transforms are frequently used in time-frequency decomposition. Constrained by the uncertainty principle, both transforms produce low-resolution time-frequency decomposition results in the time and frequency domains. To improve the resolution of the time-frequency decomposition, we use the instantaneous frequency distribution function (IFDF) to express the seismic signal. When the instantaneous frequencies of the nonstationary signal satisfy the requirements of the uncertainty principle, the support of the IFDF is exactly the support of the amplitude ridges of the signal obtained using the short-time Fourier transform. Based on this feature, we propose a new iterative algorithm to achieve a sparse time-frequency decomposition of the signal. The algorithm uses the support of the amplitude ridges of the residual signal, obtained with the short-time Fourier transform, to update the time-frequency components of the signal; the sum of the updated time-frequency components over the iterations is the result of the sparse time-frequency decomposition. Numerical examples show that the proposed method improves the resolution of the time-frequency decomposition and the accuracy of nonstationary-signal analysis. We also use the proposed method to attenuate the ground roll of field seismic data with good results.
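A simplified illustration of the ridge-support idea (not the full iterative IFDF update): compute the short-time Fourier transform, keep only the support of the dominant amplitude ridges, and reconstruct; the threshold and test signal are arbitrary.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Nonstationary test signal: a chirp (50 -> 130 Hz) plus a 200 Hz tone and noise.
x = np.sin(2 * np.pi * (50 * t + 20 * t ** 2)) + np.sin(2 * np.pi * 200 * t)
x += 0.3 * np.random.default_rng(10).standard_normal(len(t))

f, tt, Z = stft(x, fs=fs, nperseg=128)
mag = np.abs(Z)
mask = mag > 0.3 * mag.max()        # support of the dominant amplitude ridges
_, x_rec = istft(Z * mask, fs=fs, nperseg=128)  # reconstruction from the ridges
```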