At present, although human speech separation has achieved fruitful results, it is not yet ideal for separating singing voice from accompaniment. Based on low-rank and sparse optimization theory, this paper proposes a new singing voice separation algorithm called Low-rank, Sparse Representation with pre-learned dictionaries and side Information (LSRi). The algorithm models the vocal and instrumental spectrograms as a sparse matrix and a low-rank matrix, respectively, and combines a pre-learned dictionary with the voice spectrogram reconstructed from the annotation. Evaluations on the iKala dataset show that the proposed method is effective and efficient for singing voice separation.
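As background for this family of methods, the following sketch (a toy illustration only, not the LSRi algorithm) shows the masking step that typically follows a low-rank/sparse decomposition of a mixture magnitude spectrogram: the low-rank part is treated as accompaniment, the sparse part as vocals, and Wiener-style soft masks are applied to the mixture. All array names and the random placeholder data are assumptions.

```python
import numpy as np

# Placeholder magnitude spectrograms (freq bins x frames); in practice M comes from
# an STFT of the mixture, and L_acc / S_voc from a low-rank/sparse decomposition.
rng = np.random.default_rng(0)
M = np.abs(rng.standard_normal((513, 200)))        # mixture magnitude (assumed)
L_acc = np.abs(rng.standard_normal((513, 200)))    # low-rank (accompaniment) estimate
S_voc = np.abs(rng.standard_normal((513, 200)))    # sparse (vocal) estimate

# Wiener-style soft masks built from the two estimates.
eps = 1e-8
mask_voc = S_voc / (S_voc + L_acc + eps)
mask_acc = 1.0 - mask_voc

# Apply the masks to the mixture magnitude; the mixture phase would be reused
# when inverting the STFT to obtain the separated waveforms.
vocal_mag = mask_voc * M
accomp_mag = mask_acc * M
print(vocal_mag.shape, accomp_mag.shape)
```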
Indoor environment quality (IEQ) is one of the building performances of greatest concern during the operation stage. The non-uniform spatial distribution of various IEQ parameters in large-scale public buildings has been demonstrated to be an essential factor affecting occupant comfort and building energy consumption. IEQ sensors are now widely deployed in buildings to monitor thermal, visual, acoustic and air quality conditions. However, effective methods for exploring the typical spatial distribution of IEQ parameters, which is crucial for assessing and controlling non-uniform indoor environments, are still lacking. In this study, a novel clustering method for extracting IEQ spatial distribution patterns is proposed. Firstly, representation vectors reflecting the IEQ distribution in the space of interest are generated based on low-rank sparse representation. Secondly, a multi-step clustering method, which addresses the "curse of dimensionality", is designed to obtain typical IEQ distribution patterns for the entire indoor space. The proposed method was applied to the analysis of the indoor thermal environment in the Beijing Daxing International Airport terminal. Four typical spatial temperature distribution patterns of the terminal were extracted from four months of monitoring, and their representativeness was validated. These patterns revealed typical environmental issues in the terminal, such as long-term localized overheating and temperature increases caused by a sudden influx of people. The extracted typical IEQ spatial distribution patterns can help building operators assess the uneven spatial distribution of IEQ under current conditions, facilitating targeted environmental improvements, optimization of thermal comfort levels, and application of energy-saving measures.
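A minimal stand-in for the clustering stage is sketched below on synthetic sensor data; it substitutes plain PCA plus k-means for the paper's low-rank sparse representation and multi-step clustering, only to illustrate how per-timestamp spatial snapshots can be grouped into typical distribution patterns. The sensor count and data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic example: 1000 timestamps x 80 indoor temperature sensors (assumed layout).
rng = np.random.default_rng(1)
snapshots = 22.0 + rng.standard_normal((1000, 80))

# Reduce dimensionality before clustering to soften the "curse of dimensionality",
# then group the snapshots into a few typical spatial patterns.
embedding = PCA(n_components=10).fit_transform(snapshots)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)

# A "typical pattern" can be summarized as the mean spatial distribution of each cluster.
patterns = np.stack([snapshots[labels == k].mean(axis=0) for k in range(4)])
print(patterns.shape)  # (4, 80): four typical sensor-wise temperature distributions
```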
Multi-view Subspace Clustering (MVSC) has emerged as an advanced clustering method designed to integrate diverse views to uncover a common subspace, enhancing the accuracy and robustness of clustering results. The low-rank prior plays a significant role in MVSC, capturing the global data structure across views for improved performance. However, MVSC is sensitive to outliers because it relies on the Frobenius norm for error measurement. To address this, our paper proposes a Low-Rank Multi-view Subspace Clustering Based on Sparse Regularization (LMVSC-Sparse) approach. Sparse regularization helps select the most relevant features or views for clustering while ignoring irrelevant or noisy ones. This leads to a more efficient and effective representation of the data, improving clustering accuracy and robustness, especially in the presence of outliers or noisy data. By incorporating sparse regularization, LMVSC-Sparse can effectively handle the outlier sensitivity that is a common weakness of traditional MVSC methods relying solely on low-rank priors. The Alternating Direction Method of Multipliers (ADMM) is then employed to solve the proposed optimization problems. Our comprehensive experiments demonstrate the efficiency and effectiveness of LMVSC-Sparse, offering a robust alternative to traditional MVSC methods.
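For context, the sketch below implements classic single-view sparse subspace clustering (lasso self-representation followed by spectral clustering on the induced affinity); it is background for the sparse-regularization idea, not the multi-view LMVSC-Sparse model or its ADMM solver, and the toy data and parameters are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
    """X: (n_samples, n_features). Classic single-view SSC baseline."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Represent sample i sparsely by the other samples (self-expressiveness).
        D = np.delete(X, i, axis=0).T                 # features x (n-1) dictionary
        coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, X[i]).coef_
        C[i, np.arange(n) != i] = coef
    A = np.abs(C) + np.abs(C).T                       # symmetric, non-negative affinity
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                              assign_labels="kmeans", random_state=0).fit_predict(A)

# Toy usage with two noisy 2-D subspaces embedded in 5-D space.
rng = np.random.default_rng(2)
B1, B2 = rng.standard_normal((5, 2)), rng.standard_normal((5, 2))
X = np.vstack([(B1 @ rng.standard_normal((2, 30))).T,
               (B2 @ rng.standard_normal((2, 30))).T]) + 0.01 * rng.standard_normal((60, 5))
print(sparse_subspace_clustering(X, n_clusters=2)[:10])
```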
Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data's underlying structure, and their combination allows a more nuanced and accurate separation of the main data components from outliers and noise. RPCA-SL is then solved with a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulated and real data demonstrate significant advancements.
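As a reference point, the sketch below implements textbook principal component pursuit, min ||L||_* + λ||S||_1 subject to L + S = M, with the widely used inexact augmented Lagrangian iteration; it is not the proposed RPCA-SL or its proximal gradient solver, and the default parameters are common heuristic choices.

```python
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """min ||L||_* + lam*||S||_1  s.t.  L + S = M  (inexact ALM iteration)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    norm_M = np.linalg.norm(M, "fro") + 1e-12
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        R = M - L - S
        Y += mu * R
        if np.linalg.norm(R, "fro") / norm_M < tol:
            break
    return L, S

# Toy usage: low-rank data corrupted by a few large sparse outliers.
rng = np.random.default_rng(3)
L_true = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 80))
S_true = np.zeros((100, 80)); S_true[rng.random((100, 80)) < 0.05] = 10.0
L_hat, S_hat = rpca_pcp(L_true + S_true)
print(np.linalg.matrix_rank(L_hat), float((np.abs(S_hat) > 1e-3).mean()))
```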
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables are zero. As a result, many algorithms adopt a two-layer encoding that optimizes the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating the positions of non-zero variables when optimizing the binary Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets that appear together in a dataset to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that yield better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
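The following toy sketch illustrates only the general idea of mining frequent co-occurring non-zero positions from the binary Masks of good particles; it counts frequent position pairs with a plain Counter and is not the TELSO optimizer. The population and support threshold are assumptions.

```python
from collections import Counter
from itertools import combinations

import numpy as np

# Toy population: binary Masks of 30 "good" particles over 50 decision variables.
rng = np.random.default_rng(4)
masks = (rng.random((30, 50)) < 0.1).astype(int)
masks[:, [3, 7, 19]] = 1  # pretend these positions co-occur in good solutions

# Mine frequent pairs of non-zero positions (a minimal stand-in for itemset mining).
pair_counts = Counter()
for mask in masks:
    nonzero = np.flatnonzero(mask)
    pair_counts.update(combinations(nonzero.tolist(), 2))

min_support = 0.8 * len(masks)
frequent_pairs = [pair for pair, c in pair_counts.items() if c >= min_support]
print(frequent_pairs[:5])  # candidate position combinations to seed new Masks
```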
Passive detection of low-slow-small (LSS) targets is easily interfered with by the direct signal and multipath clutter, and traditional clutter suppression methods face a trade-off between step size and convergence rate. In this paper, a frequency-domain clutter suppression algorithm based on sparse adaptive filtering is proposed. The pulse compression of the error signal with the input reference signal is added to the cost function as a sparsity constraint, and the criterion for updating the filter weights is improved to obtain a purer echo signal. At the same time, the step size and penalty factor are brought into the adaptive iteration, and the input data drive the adaptive adjustment of parameters such as the step size. The proposed algorithm requires little computation, improves robustness to parameters such as the step size, reduces the weight error of the filter, and achieves good clutter suppression performance.
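For orientation, the sketch below shows the traditional time-domain baseline the abstract alludes to: a normalized LMS canceller that subtracts an adaptively filtered copy of the reference (direct-path) channel from the surveillance channel. The paper's frequency-domain sparse algorithm with data-driven step size is not reproduced, and the toy signals are assumptions.

```python
import numpy as np

def nlms_cancel(ref, surv, order=32, mu=0.5, eps=1e-6):
    """Subtract the adaptively filtered reference channel from the surveillance channel."""
    w = np.zeros(order)
    out = np.zeros_like(surv)
    for n in range(order, len(surv)):
        x = ref[n - order + 1:n + 1][::-1]      # most recent reference samples
        y = w @ x                               # clutter estimate
        e = surv[n] - y                         # residual (ideally the target echo)
        w += mu * e * x / (eps + x @ x)         # normalized LMS weight update
        out[n] = e
    return out

# Toy usage: surveillance = scaled, delayed reference (clutter) + weak noise.
rng = np.random.default_rng(5)
ref = rng.standard_normal(5000)
surv = 0.8 * np.roll(ref, 3) + 0.01 * rng.standard_normal(5000)
print(np.std(surv), np.std(nlms_cancel(ref, surv)))  # residual power drops after cancellation
```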
The Low-Rank and Sparse Representation (LRSR) method has gained popularity in Hyperspectral Image (HSI) processing. However, existing LRSR models rarely exploit the spectral-spatial classification of HSI. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, since it combines sparsity and low-rankness to maintain global and local data structures simultaneously. The LRSR is optimized using a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than ADMM. Then, to incorporate spatial information, an ANR scheme is designed by combining Euclidean and cosine distance metrics to reduce the mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels in a classification rule of minimum reconstruction error. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms related methods in terms of classification accuracy and generalization performance.
Face recognition has attracted great interest due to its importance in many real-world applications. In this paper, we present a novel low-rank sparse representation-based classification (LRSRC) method for robust face recognition. Given a set of test samples, LRSRC seeks the lowest-rank and sparsest representation matrix over all training samples. Since the low-rank model reveals the subspace structures of the data while sparsity helps to discriminate the data classes, the obtained test sample representations are both representative and discriminative. Using the representation vector of a test sample, LRSRC assigns the test sample to the class that yields the minimal reconstruction error. Experimental results on several face image databases show the effectiveness and robustness of LRSRC in face image recognition.
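The minimal-reconstruction-error rule can be illustrated with plain sparse representation-based classification (SRC), sketched below with scikit-learn's Lasso as the sparse coder; LRSRC's joint low-rank constraint on the representation matrix is not reproduced, and the toy dictionary is an assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(D, labels, y, alpha=0.01):
    """Sparse representation-based classification by minimal class-wise residual.

    D: (n_features, n_train) dictionary of training samples (columns l2-normalized).
    labels: (n_train,) class of each column.  y: (n_features,) test sample.
    """
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, y).coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ coef_c)
    return min(residuals, key=residuals.get)

# Toy usage: two classes concentrated around different directions in feature space.
rng = np.random.default_rng(6)
D = np.hstack([rng.standard_normal((20, 1)) + 0.1 * rng.standard_normal((20, 15)),
               rng.standard_normal((20, 1)) + 0.1 * rng.standard_normal((20, 15))])
D /= np.linalg.norm(D, axis=0, keepdims=True)
labels = np.array([0] * 15 + [1] * 15)
print(src_predict(D, labels, D[:, 0] + 0.05 * rng.standard_normal(20)))
```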
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of these problems implies a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, because of sparsity, most variables in the Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper is dedicated to addressing these challenges. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This enhancement dramatically improves the efficacy of frequent pattern mining: candidate sets are no longer selected based on the quantity of non-zero variables they contain, but on a higher proportion of non-zero variables within specific dimensions. Additionally, we present a novel approach to association rule mining that delves into the relationships between non-zero variables. This methodology aids in identifying sparse distributions that can expedite reductions in the objective function value. We extensively tested our algorithm on eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
For data mining tasks on large-scale data, feature selection is a pivotal stage that plays an important role in removing redundant or irrelevant features while improving classifier performance. Traditional wrapper feature selection methodologies typically require extensive model training and evaluation, which cannot deliver the desired outcomes within a reasonable computing time. In this paper, an innovative wrapper approach termed Contribution Tracking Feature Selection (CTFS) is proposed for feature selection on large-scale data; it can locate informative features without population-level evolution, so fewer evaluations are needed than for other evolutionary methods. We first introduce a refined sparse autoencoder to assess the prominence of each feature for the subsequent wrapper method. We then apply an enhanced wrapper feature selection technique that merges Mutual Information (MI) with individual feature contributions. Finally, a fine-tuning contribution tracking mechanism discerns informative features within the optimal feature subset via a dominance accumulation mechanism. Experimental results for multiple classification performance metrics demonstrate that the proposed method yields smaller feature subsets without degrading classification performance, within an acceptable runtime, compared to state-of-the-art algorithms on most large-scale benchmark datasets.
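The mutual-information ingredient can be illustrated in a few lines, sketched below on synthetic data with scikit-learn's mutual_info_classif; this is only a filter-style ranking step, not the CTFS wrapper or its contribution tracking mechanism.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Synthetic dataset: 2000 samples, 50 features, only a handful informative.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=5,
                           n_redundant=5, random_state=0)

# Rank features by mutual information with the class label; higher scores
# indicate more informative features for the subsequent wrapper stage.
mi = mutual_info_classif(X, y, random_state=0)
top10 = np.argsort(mi)[::-1][:10]
print(top10, np.round(mi[top10], 3))
```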
In this paper, we reconstruct strongly-decaying block sparse signals with the block generalized orthogonal matching pursuit (BgOMP) algorithm in the l2-bounded noise case. Under some constraints on the minimum magnitude of the nonzero elements of the strongly-decaying block sparse signal, if the sensing matrix satisfies the block restricted isometry property (block-RIP), then arbitrary strongly-decaying block sparse signals can be accurately and stably reconstructed by the BgOMP algorithm within a certain number of iterations. Furthermore, we conjecture that this condition is sharp.
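A plain block orthogonal matching pursuit is sketched below as background; the "generalized" variant analyzed in the paper (which can select several blocks per iteration) and its block-RIP analysis are not reproduced, and the block layout in the example is an assumption.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Greedy block OMP: pick the block most correlated with the residual, then
    re-fit by least squares on all selected blocks."""
    n = A.shape[1]
    blocks = [np.arange(b, b + block_size) for b in range(0, n, block_size)]
    support, x = [], np.zeros(n)
    r = y.copy()
    for _ in range(n_blocks_to_pick):
        scores = [np.linalg.norm(A[:, blk].T @ r) for blk in blocks]
        best = int(np.argmax(scores))
        if best not in support:
            support.append(best)
        idx = np.concatenate([blocks[b] for b in support])
        x_s, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        x = np.zeros(n); x[idx] = x_s
        r = y - A @ x
    return x

# Toy usage: 2 active blocks of size 4 out of 16 blocks, noiseless measurements.
rng = np.random.default_rng(7)
A = rng.standard_normal((40, 64)); A /= np.linalg.norm(A, axis=0, keepdims=True)
x_true = np.zeros(64); x_true[8:12] = [3, -2, 4, 1]; x_true[40:44] = [1, 2, -1, 2]
x_hat = block_omp(A, A @ x_true, block_size=4, n_blocks_to_pick=2)
print(np.round(np.max(np.abs(x_hat - x_true)), 6))  # ~0 in the noiseless case
```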
As an important part of rotating machinery, gearboxes often fail due to complex working conditions and harsh working environments, so it is necessary to effectively extract gearbox fault features. Gearbox fault signals usually contain multiple characteristic components and are accompanied by strong noise interference. Traditional sparse modeling methods are based on synthesis models, and there are few studies on analysis and balance models. In this paper, a balance nonconvex regularized sparse decomposition method is proposed, based on a balance model and an arctangent nonconvex penalty function. The sparse dictionary is constructed using the Tunable Q-Factor Wavelet Transform (TQWT), which satisfies the tight-frame condition and enables an efficient and fast solution. The model is optimized and solved by the alternating direction method of multipliers (ADMM), and the nonconvex regularized sparse decomposition algorithms for the synthesis and analysis models are also given. Simulation experiments provide methods for determining the regularization and balance parameters, and the approach is compared with L1-norm regularized sparse decomposition under the three models. Simulation analysis and engineering experimental signal analysis verify the effectiveness and superiority of the proposed method.
Although many multi-view clustering (MVC) algorithms with acceptable performance have been presented, to the best of our knowledge, nearly all of them need to be fed the correct number of clusters. In addition, these existing algorithms create only hard and fuzzy partitions for multi-view objects, which are often located in highly overlapping areas of the multi-view feature space. The adoption of hard and fuzzy partitions ignores the ambiguity and uncertainty in the assignment of objects, likely leading to performance degradation. To address these issues, we propose a novel sparse reconstructive multi-view evidential clustering algorithm (SRMVEC). Based on a sparse reconstructive procedure, SRMVEC learns a shared affinity matrix across views and maps multi-view objects to a 2-dimensional human-readable chart by calculating two newly defined mathematical metrics for each object. From this chart, users can detect the number of clusters and select several objects in the dataset as cluster centers. SRMVEC then derives a credal partition under the framework of evidence theory, improving the fault tolerance of clustering. Ablation studies show the benefits of adopting the sparse reconstructive procedure and evidence theory. Besides, SRMVEC delivers effectiveness on benchmark datasets by outperforming some state-of-the-art methods.
Quantized training has been proven to be a prominent method for achieving deep neural network training under limited computational resources. It uses low bit-width arithmetic with a proper scaling factor to achieve negligible accuracy loss. Cambricon-Q is an ASIC design proposed to efficiently support quantized training, and it achieves significant performance improvement. However, there are still two caveats in the design. First, Cambricon-Q with different hardware specifications may lead to different numerical errors, resulting in non-reproducible behaviors that may become a major concern in critical applications. Second, Cambricon-Q cannot leverage data sparsity, so considerable cycles could still be squeezed out. To address these caveats, the acceleration core of Cambricon-Q is redesigned to support fine-grained irregular data processing. The new design not only enables acceleration on sparse data, but also enables local dynamic quantization over contiguous value ranges (which is hardware-independent) instead of contiguous addresses (which depend on hardware factors). Experimental results show that the accuracy loss of the method remains negligible, and the accelerator achieves a 1.61x performance improvement over Cambricon-Q with about 10% more energy.
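The scaling-factor quantization the abstract refers to can be illustrated generically, as in the sketch below; this is plain per-tensor symmetric quantization in numpy and says nothing about Cambricon-Q's hardware or its value-range-based local dynamic quantization.

```python
import numpy as np

def quantize_symmetric(x, bits=8):
    """Per-tensor symmetric quantization: q = round(x / scale), clipped to the int range."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-12
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int8 if bits == 8 else np.int32), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(8)
w = rng.standard_normal(1000).astype(np.float32)
q, s = quantize_symmetric(w, bits=8)
err = np.abs(dequantize(q, s) - w).max()
print(q.dtype, round(float(err), 4))   # rounding error is bounded by half the scale
```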
The task of dividing corrupted data into their respective subspaces can be well illustrated, both theoretically and numerically, by recovering the low-rank and sparse-column components of a given matrix. Generally, it can be characterized as a convex minimization problem involving a matrix nuclear norm and the 2,1-norm. However, solving the resulting problem is challenging due to the non-smoothness of the objective function. One of the earliest solvers is a 3-block alternating direction method of multipliers (ADMM) that updates each variable in a Gauss-Seidel manner. In this paper, we present three variants of ADMM for the 3-block separable minimization problem. More precisely, whenever one variable is determined, the resulting problem can be regarded as a convex minimization with two blocks and can be solved immediately using the standard ADMM. If the inner iteration loops only once, the iterative scheme reduces to the ADMM with Gauss-Seidel updates. If the solution from the inner iteration is assumed to be exact, the convergence can be deduced easily from the literature. Performance comparisons with a couple of recently designed solvers illustrate that the proposed methods are effective and competitive.
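The building blocks of such splitting methods are the proximal operators of the regularizers involved; a sketch is given below (soft-thresholding for the l1 norm, column-wise shrinkage for the 2,1-norm, and singular value thresholding for the nuclear norm), independent of the authors' specific ADMM variants.

```python
import numpy as np

def prox_l1(X, tau):
    """Soft-thresholding: proximal operator of tau*||X||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def prox_l21(X, tau):
    """Column-wise shrinkage: proximal operator of tau*||X||_{2,1} (sum of column norms)."""
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def prox_nuclear(X, tau):
    """Singular value thresholding: proximal operator of tau*||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(9)
X = rng.standard_normal((30, 20))
print(prox_l21(X, 5.0).shape, np.linalg.matrix_rank(prox_nuclear(X, 3.0)))
```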
The proportionate recursive least squares (PRLS) algorithm has shown faster convergence and better performance than both proportionate-updating (PU) based least mean squares (LMS) algorithms and RLS algorithms with a sparse regularization term. In this paper, we propose a variable forgetting factor (VFF) PRLS algorithm with a sparse penalty, e.g., the l_1-norm, for sparse system identification. To reduce the computational complexity of the proposed algorithm, a fast implementation based on the dichotomous coordinate descent (DCD) algorithm is also derived. Simulation results indicate the superior performance of the proposed algorithm.
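For reference, the classical exponentially weighted RLS recursion is sketched below; the proportionate gain assignment, variable forgetting factor, sparse penalty, and DCD-based fast implementation proposed in the paper are not reproduced, and the toy channel is an assumption.

```python
import numpy as np

def rls_identify(x, d, order=16, lam=0.99, delta=1e2):
    """Classical RLS with forgetting factor lam for FIR system identification."""
    w = np.zeros(order)
    P = delta * np.eye(order)                    # inverse correlation matrix estimate
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]         # regressor: x[n], x[n-1], ..., x[n-order+1]
        k = P @ u / (lam + u @ P @ u)            # gain vector
        e = d[n] - w @ u                         # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

# Toy usage: identify a sparse 16-tap FIR channel from noisy observations.
rng = np.random.default_rng(10)
h = np.zeros(16); h[[2, 9]] = [1.0, -0.5]
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(rls_identify(x, d)[[2, 9]], 3))   # should be close to [1.0, -0.5]
```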
Signal decomposition and multiscale signal analysis provide many useful tools for time-frequency analysis. We propose a random feature method for analyzing time-series data by constructing a sparse approximation to the spectrogram. The randomization is in both the time window locations and the frequency sampling, which lowers the overall sampling and computational cost. The sparsification of the spectrogram leads to a sharp separation between time-frequency clusters, which makes it easier to identify intrinsic modes and thus leads to a new data-driven mode decomposition. Applications include signal representation, outlier removal, and mode decomposition. On benchmark tests, we show that our approach outperforms other state-of-the-art decomposition methods.
Designing a sparse array with a reduced number of transmit/receive modules (TRMs) is vital for applications where the antenna system's size, weight, allowed operating space, and cost are limited. Sparse arrays exhibit distinct architectures, roughly classified into three categories: thinned arrays, nonuniformly spaced arrays, and clustered arrays. While numerous advanced synthesis methods have been presented for these three types of sparse arrays in recent years, a comprehensive review of the latest developments in sparse array synthesis is lacking. This work aims to fill that gap by thoroughly summarizing these techniques. The study includes synthesis examples to facilitate a comparative analysis of different techniques in terms of both accuracy and efficiency. This review is thus intended to assist researchers and engineers in related fields, offering a clear understanding of the development of, and the distinctions among, sparse array synthesis techniques.
Wayside monitoring is a promising cost-effective alternative for predicting damage in rolling stock. The main goal of this work is to present an unsupervised methodology to identify wheels with out-of-roundness (OOR) damage, such as wheel flats and polygonal wheels. The automatic damage identification algorithm is based on the vertical acceleration evaluated on the rails using a virtual wayside monitoring system and involves a two-step procedure. The first step defines a confidence boundary using (healthy) baseline measurements evaluated on the rail. The second step classifies damage for predefined scenarios with different severity levels. The procedure is based on a machine learning methodology and includes the following stages: (1) data collection; (2) damage-sensitive feature extraction from the acquired responses using a neural network model, namely the sparse autoencoder (SAE); (3) data fusion based on the Mahalanobis distance; and (4) unsupervised feature classification through outlier and cluster analysis. The procedure uses baseline responses at different speeds and rail irregularities to train the SAE model. The trained SAE can then reconstruct test responses (not used in training), allowing the accumulated difference between the original and reconstructed signals to be computed. The results prove the efficiency of the proposed approach in identifying the two most common types of OOR in railway wheels.
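A compact stand-in for the reconstruction-error stage is sketched below: an autoencoder-like regressor (scikit-learn's MLPRegressor trained to reproduce its input) is fitted on baseline data, and a confidence boundary is set from the baseline reconstruction errors. The paper's SAE architecture, Mahalanobis-distance fusion, and cluster analysis are not reproduced, and all data here are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic "healthy" rail-acceleration features (baseline) and test features.
rng = np.random.default_rng(11)
baseline = rng.standard_normal((800, 30))
test_ok = rng.standard_normal((100, 30))
test_damaged = rng.standard_normal((100, 30)) + 2.5      # shifted -> larger residuals

scaler = StandardScaler().fit(baseline)
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh", max_iter=2000,
                  random_state=0).fit(scaler.transform(baseline), scaler.transform(baseline))

def recon_error(X):
    Z = scaler.transform(X)
    return np.linalg.norm(Z - ae.predict(Z), axis=1)      # per-sample reconstruction error

threshold = np.percentile(recon_error(baseline), 99)      # confidence boundary from baseline
print((recon_error(test_ok) > threshold).mean(),          # false alarms, roughly 1%
      (recon_error(test_damaged) > threshold).mean())     # detection rate, expected near 1.0
```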