Traditional large-scale multi-objective evolutionary algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables of the Pareto optimal solutions are zero. As a result, many algorithms adopt a two-layer encoding that optimizes a binary variable vector Mask and a real variable vector Dec separately. Existing optimizers, however, focus on locating the positions of non-zero variables when optimizing the binary Mask, and approximating the sparse distribution of the real Pareto optimal solutions does not necessarily optimize the objective functions. In data mining, frequent itemsets that appear together in a dataset are commonly mined to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address SLMOPs. TELSO mines frequent items from multiple particles with better objective values to find Mask combinations that yield better objective values, enabling fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of both solution quality and convergence speed.
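To make the frequent-itemset idea concrete, here is a minimal sketch (not the authors' implementation): treat the non-zero positions of each good particle's binary Mask as a transaction, keep single positions and position pairs whose support exceeds a threshold, and use them as building blocks for new Masks. The function name mine_frequent_positions, the toy masks, and the support threshold are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def mine_frequent_positions(masks, min_support=0.5):
    """Return non-zero positions (1-itemsets) and position pairs (2-itemsets)
    whose support among the given binary masks exceeds min_support."""
    masks = np.asarray(masks, dtype=bool)
    support = masks.mean(axis=0)                      # support of single positions
    frequent_items = set(np.flatnonzero(support >= min_support))
    frequent_pairs = set()
    for i, j in combinations(sorted(frequent_items), 2):
        if np.mean(masks[:, i] & masks[:, j]) >= min_support:
            frequent_pairs.add((i, j))
    return frequent_items, frequent_pairs

# Toy example: masks of four "good" particles over 8 decision variables.
masks = [[1, 0, 1, 0, 0, 0, 1, 0],
         [1, 0, 1, 0, 0, 0, 0, 0],
         [1, 0, 1, 0, 1, 0, 1, 0],
         [0, 0, 1, 0, 0, 0, 1, 0]]
items, pairs = mine_frequent_positions(masks, min_support=0.75)
print(items)   # e.g. {0, 2, 6}
print(pairs)   # e.g. {(0, 2), (2, 6)}
```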
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. The large scale of these problems means a high-dimensional decision space, requiring algorithms to traverse a vast search space with limited computational resources. Furthermore, because of sparsity, most variables in the Pareto optimal solutions are zero, making it difficult for algorithms to identify the non-zero variables efficiently. This paper is dedicated to addressing these challenges. First, we introduce objective functions customized to mining maximum and minimum candidate sets, which substantially improves the efficacy of frequent pattern mining: candidate sets are selected not by the number of non-zero variables they contain, but by a higher proportion of non-zero variables within specific dimensions. Additionally, we present an association rule mining approach that explores the relationships between non-zero variables, helping to identify sparse distributions that can expedite reductions in the objective function values. We extensively tested our algorithm on eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
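As a hedged illustration of the association-rule step, the sketch below mines simple one-to-one rules of the form "variable i is non-zero implies variable j is non-zero" from binary masks using support and confidence; the thresholds and data are made up, and the paper's candidate-set objectives are not reproduced.

```python
import numpy as np

def association_rules(masks, min_support=0.5, min_confidence=0.8):
    """Mine one-to-one rules 'variable i non-zero => variable j non-zero'
    from binary masks, keeping rules with sufficient support and confidence."""
    masks = np.asarray(masks, dtype=float)
    support = masks.mean(axis=0)
    rules = []
    d = masks.shape[1]
    for i in range(d):
        if support[i] < min_support:
            continue
        for j in range(d):
            if i == j:
                continue
            joint = np.mean(masks[:, i] * masks[:, j])   # support of {i, j}
            confidence = joint / support[i]
            if joint >= min_support and confidence >= min_confidence:
                rules.append((i, j, confidence))
    return rules

masks = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 1],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]])
print(association_rules(masks, min_support=0.5, min_confidence=0.6))
```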
In this paper, we reconstruct strongly-decaying block sparse signals with the block generalized orthogonal matching pursuit (BgOMP) algorithm in the l2-bounded noise case. Under some restraints on the minimum magnitude of the non-zero elements of the strongly-decaying block sparse signal, if the sensing matrix satisfies the block restricted isometry property (block-RIP), then arbitrary strongly-decaying block sparse signals can be accurately and stably reconstructed by the BgOMP algorithm within a finite number of iterations. Furthermore, we conjecture that this condition is sharp.
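For orientation, a compact block orthogonal matching pursuit sketch is given below; it selects one block per iteration and re-fits by least squares, which is a simplification of the BgOMP variant analyzed in the paper. The toy sensing matrix, block size, and signal are illustrative assumptions.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_select):
    """Greedy block OMP: pick the block whose columns correlate most with the
    residual, then re-fit all selected blocks by least squares."""
    m, n = A.shape
    assert n % block_size == 0
    blocks = n // block_size
    residual = y.copy()
    selected = []
    for _ in range(n_blocks_to_select):
        corr = A.T @ residual                              # correlation with residual
        energies = [np.linalg.norm(corr[b*block_size:(b+1)*block_size]) for b in range(blocks)]
        best = int(np.argmax(energies))
        if best not in selected:
            selected.append(best)
        cols = np.concatenate([np.arange(b*block_size, (b+1)*block_size) for b in selected])
        x_sel, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ x_sel
    x = np.zeros(n)
    x[cols] = x_sel
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))
x_true = np.zeros(60); x_true[12:16] = [3.0, -2.0, 1.5, 0.8]   # one active block of size 4
y = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(block_omp(A, y, block_size=4, n_blocks_to_select=1)[12:16], 2))
# expected to be close to [3, -2, 1.5, 0.8]
```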
Although many multi-view clustering (MVC) algorithms with acceptable performance have been presented, to the best of our knowledge, nearly all of them need to be fed the correct number of clusters. In addition, these existing algorithms create only hard and fuzzy partitions for multi-view objects, which are often located in highly overlapping areas of the multi-view feature space. The adoption of hard and fuzzy partitions ignores the ambiguity and uncertainty in the assignment of objects, likely leading to performance degradation. To address these issues, we propose a novel sparse reconstructive multi-view evidential clustering algorithm (SRMVEC). Based on a sparse reconstructive procedure, SRMVEC learns a shared affinity matrix across views and maps multi-view objects to a 2-dimensional human-readable chart by calculating two newly defined mathematical metrics for each object. From this chart, users can detect the number of clusters and select several objects existing in the dataset as cluster centers. Then, SRMVEC derives a credal partition under the framework of evidence theory, improving the fault tolerance of clustering. Ablation studies show the benefits of adopting the sparse reconstructive procedure and evidence theory. Besides, SRMVEC delivers effectiveness on benchmark datasets by outperforming some state-of-the-art methods.
Quantized training has been proven to be a prominent method for achieving deep neural network training under limited computational resources. It uses low bit-width arithmetic with a proper scaling factor to achieve negligible accuracy loss. Cambricon-Q is an ASIC design proposed to efficiently support quantized training and achieves significant performance improvement. However, there are still two caveats in the design. First, Cambricon-Q with different hardware specifications may lead to different numerical errors, resulting in non-reproducible behaviors that may become a major concern in critical applications. Second, Cambricon-Q cannot leverage data sparsity, where considerable cycles could still be squeezed out. To address these caveats, the acceleration core of Cambricon-Q is redesigned to support fine-grained irregular data processing. The new design not only enables acceleration on sparse data, but also enables performing local dynamic quantization by contiguous value ranges (which is hardware independent) instead of contiguous addresses (which depend on hardware factors). Experimental results show that the accuracy loss of the method remains negligible, and the accelerator achieves 1.61× performance improvement over Cambricon-Q, with about 10% energy increase.
Passive detection of low-slow-small (LSS) targets is easily interfered with by the direct signal and multipath clutter, and traditional clutter suppression methods face a trade-off between step size and convergence rate. In this paper, a frequency-domain clutter suppression algorithm based on sparse adaptive filtering is proposed. The pulse compression operation between the error signal and the input reference signal is added to the cost function as a sparsity constraint, and the criterion for filter weight updating is improved to obtain a purer echo signal. At the same time, the step size and penalty factor are brought into the adaptive iteration process, and the input data are used to drive the adaptive changes of parameters such as the step size. The proposed algorithm has a small computational load, improves robustness to parameters such as the step size, reduces the weight error of the filter, and has good clutter suppression performance.
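The abstract does not list the update equations; the sketch below shows a generic zero-attracting (l1-penalized) LMS update, one standard way of adding a sparsity term to an adaptive filter's cost function. It is only a stand-in for the frequency-domain, pulse-compression-constrained filter proposed here; the step size mu and penalty rho are illustrative.

```python
import numpy as np

def zero_attracting_lms(x, d, n_taps=16, mu=0.01, rho=1e-4):
    """Adaptive FIR filtering of reference x against desired signal d with an
    l1 (zero-attracting) term that pushes small weights toward zero."""
    w = np.zeros(n_taps)
    errors = np.empty(len(d))
    for k in range(len(d)):
        u = x[max(0, k - n_taps + 1):k + 1][::-1]          # most recent samples first
        u = np.pad(u, (0, n_taps - len(u)))                 # zero-pad at start-up
        e = d[k] - w @ u
        w += mu * e * u - rho * np.sign(w)                  # LMS step + sparsity attractor
        errors[k] = e
    return w, errors

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.zeros(16); h[[0, 3]] = [1.0, -0.5]                   # sparse "clutter channel"
d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)
w, e = zero_attracting_lms(x, d)
print(np.round(w, 2))   # approximately the sparse channel h
```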
The proportionate recursive least squares (PRLS) algorithm has shown faster convergence and better performance than both proportionate updating (PU) based least mean squares (LMS) algorithms and RLS algorithms with a sparse regularization term. In this paper, we propose a variable forgetting factor (VFF) PRLS algorithm with a sparse penalty, e.g., the l_1-norm, for sparse identification. To reduce the computational complexity of the proposed algorithm, a fast implementation based on the dichotomous coordinate descent (DCD) algorithm is also derived. Simulation results indicate the superior performance of the proposed algorithm.
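For context, here is a minimal sketch of the standard exponentially weighted RLS recursion that PRLS-type algorithms build on; the variable forgetting factor, proportionate gains, l_1 penalty, and DCD implementation of the proposed method are not reproduced, and the fixed forgetting factor lam is an illustrative choice.

```python
import numpy as np

def rls(x, d, n_taps=8, lam=0.99, delta=100.0):
    """Standard recursive least squares with a fixed forgetting factor lam."""
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)          # inverse correlation matrix estimate
    for k in range(n_taps - 1, len(d)):
        u = x[k - n_taps + 1:k + 1][::-1]
        e = d[k] - w @ u
        g = P @ u / (lam + u @ P @ u)   # gain vector
        w += g * e
        P = (P - np.outer(g, u @ P)) / lam
    return w

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
h = np.array([0.0, 0.9, 0.0, 0.0, -0.3, 0.0, 0.0, 0.0])   # sparse system to identify
d = np.array([h @ x[k - 7:k + 1][::-1] for k in range(7, 2000)])
d = np.concatenate([np.zeros(7), d]) + 0.01 * rng.standard_normal(2000)
print(np.round(rls(x, d), 2))   # close to h
```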
Signal decomposition and multiscale signal analysis provide many useful tools for time-frequency analysis. We propose a random feature method for analyzing time-series data by constructing a sparse approximation to the spectrogram. The randomization is in both the time window locations and the frequency sampling, which lowers the overall sampling and computational cost. The sparsification of the spectrogram leads to a sharp separation between time-frequency clusters, which makes it easier to identify intrinsic modes and thus leads to a new data-driven mode decomposition. The applications include signal representation, outlier removal, and mode decomposition. On benchmark tests, we show that our approach outperforms other state-of-the-art decomposition methods.
Designing a sparse array with a reduced number of transmit/receive modules (TRMs) is vital for applications where the antenna system's size, weight, allowed operating space, and cost are limited. Sparse arrays exhibit distinct architectures, roughly classified into three categories: thinned arrays, non-uniformly spaced arrays, and clustered arrays. While numerous advanced synthesis methods have been presented for these three types of sparse arrays in recent years, a comprehensive review of the latest developments in sparse array synthesis is lacking. This work aims to fill this gap by thoroughly summarizing these techniques. The study includes synthesis examples to facilitate a comparative analysis of different techniques in terms of both accuracy and efficiency. Thus, this review is intended to assist researchers and engineers in related fields, offering a clear understanding of the development of, and distinctions among, sparse array synthesis techniques.
In practice, simultaneous impact localization and time-history reconstruction can hardly be achieved, due to the ill-posed and under-determined problems induced by constrained and harsh measuring conditions. Although l_1 regularization can be used to obtain sparse solutions, it tends to underestimate solution amplitudes as a biased estimator. To address this issue, a novel impact force identification method with l_p regularization is proposed in this paper, using the alternating direction method of multipliers (ADMM). By decomposing the complex primal problem into sub-problems solvable in parallel via proximal operators, ADMM can address the challenge effectively. To mitigate the sensitivity to regularization parameters, an adaptive regularization parameter is derived based on the K-sparsity strategy. Then, an ADMM-based sparse regularization method is developed that is capable of handling l_p regularization with arbitrary p values using adaptively updated parameters. The effectiveness and performance of the proposed method are validated on an aircraft skin-like composite structure. Additionally, an investigation into the optimal p value for achieving high-accuracy solutions via l_p regularization is conducted. It turns out that l_0.6 regularization consistently yields sparser and more accurate solutions for impact force identification than the classic l_1 regularization method. The proposed method can simultaneously reconstruct the impact time history with high accuracy and accurately localize the impact using an under-determined sensor configuration.
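The paper's solver handles l_p regularization with arbitrary p via proximal operators inside ADMM; as a hedged illustration, the sketch below solves the classic p = 1 case (the LASSO), whose proximal operator reduces to elementwise soft-thresholding. The penalty lam and ADMM parameter rho are fixed by hand here rather than adapted by the K-sparsity strategy.

```python
import numpy as np

def admm_lasso(A, y, lam=0.1, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by ADMM with the
    soft-thresholding proximal operator of the l1 norm."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Aty + rho * (z - u))                        # quadratic sub-problem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)      # prox of the l1 norm
        u += x - z                                                           # dual update
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, y, lam=0.2)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # recovered support, ideally [5 40 77]
```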
To address the seismic face stability challenges encountered in urban and subsea tunnel construction, an efficient probabilistic analysis framework for shield tunnel faces under seismic conditions is proposed. Based on the upper-bound theory of limit analysis, an improved three-dimensional discrete deterministic mechanism, accounting for the heterogeneous nature of soil media, is formulated to evaluate seismic face stability. A metamodel for failure probability assessment of seismic tunnel faces is constructed by integrating the sparse polynomial chaos expansion method (SPCE) with the modified pseudo-dynamic approach (MPD). The improved deterministic model is validated by comparison with published literature and numerical simulation results, and the SPCE-MPD metamodel is examined against the traditional MCS method. Based on the SPCE-MPD metamodels, the seismic effects on face failure probability and reliability index are presented, and a global sensitivity analysis (GSA) is used to rank the influence of the seismic action parameters. Finally, the proposed approach is shown to be effective on an engineering case, the Chengdu outer ring tunnel. The results show that the higher uncertainty of the seismic response on face stability should be noted in areas with intense earthquakes, and that the variation of seismic wave velocity has the most profound influence on tunnel face stability.
This study introduces a pre-orthogonal adaptive Fourier decomposition (POAFD) to obtain approximations and numerical solutions to the fractional Laplacian initial value problem and the extension problem of Caffarelli and Silvestre (the generalized Poisson equation). As a first step, the method expands the initial data function into a sparse series of the fundamental solutions with fast convergence, and, as a second step, makes use of the semigroup or the reproducing kernel property of each of the expanding entries. Experiments show the effectiveness and efficiency of the proposed series solutions.
This paper reviews the adaptive sparse grid discontinuous Galerkin (aSG-DG) method for computing high-dimensional partial differential equations (PDEs) and its software implementation. The C++ software package, called AdaM-DG, implements the aSG-DG method and is available on GitHub at https://github.com/JuntaoHuang/adaptive-multiresolution-DG. The package is capable of treating a large class of high-dimensional linear and nonlinear PDEs. We review the essential components of the algorithm and the functionality of the software, including the multiwavelets used, the assembly of bilinear operators, and the fast matrix-vector product for data with hierarchical structures. We further demonstrate the performance of the package by reporting the numerical error and CPU cost for several benchmark tests, including linear transport equations, wave equations, and Hamilton-Jacobi (HJ) equations.
This paper addresses the complex and challenging problem of disturbance localization in the current power system operating environment by proposing a disturbance localization method for power systems based on group sparse representation and the entropy weight method. Three different electrical quantities are selected as observations in the compressed sensing algorithm. The entropy weight method is employed to calculate the weights of the different observations based on their relative disturbance levels. Subsequently, by leveraging the topological information of the power system and pre-designing an overcomplete dictionary of disturbances based on the corresponding system parameter variations caused by disturbances, an improved joint generalized orthogonal matching pursuit (J-GOMP) algorithm is utilized for reconstruction. The reconstructed sparse vectors are divided into three parts. If at least two parts have consistent node identifiers, the node is identified as the disturbance node. If the node identifiers in all three parts are inconsistent, further analysis considering the weights is conducted to determine the disturbance node. Simulation results based on the IEEE 39-bus system model demonstrate that the proposed method, utilizing electrical quantity information from only 8 measurement points, effectively locates disturbance positions, is applicable to various disturbance types, and has strong noise resistance.
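The entropy weight step is standard and easy to illustrate: normalize each observation type, compute its information entropy, and assign larger weights to observations with lower entropy (i.e., stronger discriminative variation). A minimal sketch with a made-up observation matrix follows; it does not reproduce the J-GOMP reconstruction.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows are measurement points, columns are
    observation types (e.g. different electrical quantities)."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0, keepdims=True)                 # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    entropy = -(P * logs).sum(axis=0) / np.log(n)        # entropy normalized to [0, 1]
    d = 1.0 - entropy                                    # degree of diversification
    return d / d.sum()

# Disturbance levels of three electrical quantities at five measurement points.
X = np.array([[0.9, 0.2, 0.30],
              [0.1, 0.2, 0.28],
              [0.1, 0.2, 0.31],
              [0.1, 0.2, 0.29],
              [0.1, 0.2, 0.30]])
print(np.round(entropy_weights(X), 3))   # first column varies most -> largest weight
```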
In order to extract richer feature information of ship targets from sea clutter and to address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP), based on the maximum margin criterion (MMC), is proposed for recognizing the class of ship targets from the high-resolution range profile (HRRP). Multi-scale fusion is introduced to capture the local and detailed information in small-scale features and the global and contour information in large-scale features, helping to extract edge information from sea clutter and further improving target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparsity of the data and maximizes class separability in the reduced dimensionality through a reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method can effectively extract the features of ship targets from sea clutter, further reduce the feature dimensionality, and improve target recognition performance.
Multi-view Subspace Clustering (MVSC) is an advanced clustering method designed to integrate diverse views to uncover a common subspace, enhancing the accuracy and robustness of clustering results. A low-rank prior plays a significant role in MVSC, capturing the global data structure across views for improved performance; however, it suffers from outlier sensitivity due to its reliance on the Frobenius norm for error measurement. Addressing this, our paper proposes a Low-Rank Multi-view Subspace Clustering Based on Sparse Regularization (LMVSC-Sparse) approach. Sparse regularization helps to select the most relevant features or views for clustering while ignoring irrelevant or noisy ones, leading to a more efficient and effective representation of the data and improving clustering accuracy and robustness, especially in the presence of outliers or noisy data. By incorporating sparse regularization, LMVSC-Sparse can effectively handle the outlier sensitivity that is a common challenge for traditional MVSC methods relying solely on low-rank priors. The Alternating Direction Method of Multipliers (ADMM) algorithm is then employed to solve the proposed optimization problems. Our comprehensive experiments demonstrate the efficiency and effectiveness of LMVSC-Sparse, offering a robust alternative to traditional MVSC methods.
Principal Component Analysis (PCA) is a widely used technique for data analysis and dimensionality reduction, but its sensitivity to feature scale and outliers limits its applicability. Robust Principal Component Analysis (RPCA) addresses these limitations by decomposing the data into a low-rank matrix capturing the underlying structure and a sparse matrix identifying outliers, enhancing robustness against noise and outliers. This paper introduces a novel RPCA variant, Robust PCA Integrating Sparse and Low-rank Priors (RPCA-SL). Each prior targets a specific aspect of the data's underlying structure, and their combination allows a more nuanced and accurate separation of the main data components from outliers and noise. RPCA-SL is then solved by a proximal gradient algorithm for improved anomaly detection and data decomposition. Experimental results on simulated and real data demonstrate significant advancements.
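For orientation, below is a compact sketch of the classic low-rank-plus-sparse RPCA decomposition that RPCA-SL extends, solved with the well-known inexact augmented Lagrange multiplier scheme (singular-value thresholding for the low-rank part, soft-thresholding for the sparse part) rather than the proximal gradient algorithm of the paper; data and parameters are illustrative.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Elementwise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Classic RPCA via inexact augmented Lagrange multipliers:
    decompose M into a low-rank L plus a sparse S."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.max(np.abs(M)) / lam)   # dual variable initialization
    mu = 1.25 / norm_two
    mu_bar = mu * 1e7
    rho = 1.5
    S = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(residual) / np.linalg.norm(M) < tol:
            break
    return L, S

rng = np.random.default_rng(4)
low_rank = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
outliers = np.zeros((60, 40))
outliers.flat[rng.choice(60 * 40, size=100, replace=False)] = 10.0 * rng.standard_normal(100)
L, S = rpca_ialm(low_rank + outliers)
s_vals = np.linalg.svd(L, compute_uv=False)
print((s_vals > 1e-3).sum(), np.count_nonzero(np.abs(S) > 1e-3))   # ideally 2 and about 100
```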
Fixed-point fast sweeping methods are a class of explicit iterative methods developed in the literature to efficiently compute steady-state solutions of hyperbolic partial differential equations (PDEs). As with other fast sweeping schemes, fixed-point fast sweeping methods use Gauss-Seidel iterations and an alternating sweeping strategy to cover the characteristics of hyperbolic PDEs in a certain direction simultaneously in each sweeping order. The resulting iterative schemes have a fast convergence rate to steady-state solutions. Moreover, an advantage of fixed-point fast sweeping methods over other fast sweeping methods is that they are explicit and do not involve the inverse operation of any nonlinear local system; hence, they are robust and flexible and have been combined with high-order accurate weighted essentially non-oscillatory (WENO) schemes to solve various hyperbolic PDEs in the literature. For multidimensional nonlinear problems, high-order fixed-point fast sweeping WENO methods still require a large amount of computation. In this technical note, we apply sparse-grid techniques, an effective approximation tool for multidimensional problems, to fixed-point fast sweeping WENO methods in order to reduce their computational cost. We focus on fixed-point fast sweeping WENO schemes with third-order accuracy (Zhang et al. 2006 [41]) for solving Eikonal equations, an important class of static Hamilton-Jacobi (H-J) equations. Numerical experiments on solving multidimensional Eikonal equations and a more general static H-J equation show that the sparse-grid computations of the fixed-point fast sweeping WENO schemes achieve large savings of CPU time on refined meshes while maintaining accuracy and resolution comparable with those on the corresponding regular single grids.
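As a hedged illustration of the underlying iteration, the sketch below implements first-order Gauss-Seidel fast sweeping for a 2-D Eikonal equation with alternating sweep orderings; it is not the third-order WENO scheme or its sparse-grid version discussed in the note, and the grid, source, and slowness field are made up.

```python
import numpy as np

def fast_sweep_eikonal(f, h, source, n_rounds=4):
    """First-order fast sweeping for |grad u| = f on a uniform 2-D grid,
    with u = 0 at the source point (Gauss-Seidel + four alternating sweep orders)."""
    ny, nx = f.shape
    u = np.full((ny, nx), 1e10)
    u[source] = 0.0
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)), (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_rounds):
        for iy_order, ix_order in orders:
            for i in iy_order:
                for j in ix_order:
                    if (i, j) == source:
                        continue
                    a = min(u[i - 1, j] if i > 0 else 1e10, u[i + 1, j] if i < ny - 1 else 1e10)
                    b = min(u[i, j - 1] if j > 0 else 1e10, u[i, j + 1] if j < nx - 1 else 1e10)
                    if abs(a - b) >= f[i, j] * h:
                        cand = min(a, b) + f[i, j] * h                       # one-sided update
                    else:
                        cand = 0.5 * (a + b + np.sqrt(2.0 * (f[i, j] * h) ** 2 - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u

f = np.ones((41, 41))          # unit slowness -> u is the distance to the source
u = fast_sweep_eikonal(f, h=0.05, source=(20, 20))
print(round(u[20, 40], 3))     # distance from center to right edge, ~1.0
```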
The linear regression model with a doubly sparse structure describes explanatory variables that are sparse both between groups and within groups, and Sparse Group Lasso is commonly used for variable selection in this model. In many applications, however, the explanatory variables can hardly be measured exactly, so measurement error has to be taken into account when applying Sparse Group Lasso. To address this problem, this paper proposes a Sparse Group Lasso variable selection method for linear measurement error regression models with a doubly sparse structure (MESGL). The method first corrects the error in the observed data with a positive semidefinite projection operator, then recovers the corrected data with the ADMM algorithm, and finally applies Sparse Group Lasso for variable selection and parameter estimation. Under some regularity conditions, we establish non-asymptotic Oracle inequalities for the parameter estimators, and simulation studies verify that MESGL performs well in both variable selection and parameter estimation.
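Two ingredients of the method are simple enough to sketch in a hedged way: projecting an error-contaminated symmetric matrix onto the positive semidefinite cone, and the closed-form proximal operator of the Sparse Group Lasso penalty that proximal/ADMM solvers apply repeatedly. The group structure and penalty values below are illustrative, and the sketch is not the full MESGL procedure.

```python
import numpy as np

def psd_project(S):
    """Project a symmetric matrix onto the positive semidefinite cone by
    clipping negative eigenvalues (used to correct a noisy covariance estimate)."""
    w, V = np.linalg.eigh((S + S.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def prox_sparse_group_lasso(v, groups, lam1, lam2):
    """Closed-form proximal operator of lam1*||x||_1 + lam2*sum_g ||x_g||_2:
    elementwise soft-thresholding followed by groupwise shrinkage."""
    z = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)
    out = np.zeros_like(v)
    for g in groups:
        norm_g = np.linalg.norm(z[g])
        if norm_g > lam2:
            out[g] = (1.0 - lam2 / norm_g) * z[g]
    return out

v = np.array([3.0, -0.2, 0.1, 2.5, 2.0, 0.3, -0.25, 0.2])
groups = [[0, 1, 2], [3, 4], [5, 6, 7]]
print(prox_sparse_group_lasso(v, groups, lam1=0.15, lam2=0.5))
# index 2 is zeroed within its group by the l1 step; the whole third group
# is removed by the group-shrinkage step.
```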