High-precision, real-time diagnosis of the sucker rod pumping system (SRPS) is important for quickly mastering oil well operations. Deep learning-based classification of the dynamometer card (DC) of oil wells is an efficient diagnosis method. However, feeding the DC into a deep learning framework as a two-dimensional image suffers from low feature utilization and high computational effort. Additionally, different SRPSs in an oil field have different system parameters, and the same SRPS generates different DCs at different moments. Thus, there is heterogeneity in field data, which can dramatically impair diagnostic accuracy. To solve these problems, a working condition recognition method based on a 4-segment time-frequency signature matrix (4S-TFSM) and deep learning is presented in this paper. First, the 4-segment time-frequency signature (4S-TFS) method, which reduces computing power requirements, is proposed for feature extraction from DC data. Subsequently, the 4S-TFSM is constructed by relative normalization and matrix calculation to synthesize the features of multiple data and address data heterogeneity. Finally, a convolutional neural network (CNN), one of the deep learning frameworks, determines the working conditions from the 4S-TFSM. Experiments on field data verify that the proposed diagnostic method based on 4S-TFSM and CNN (4S-TFSM-CNN) can significantly improve the accuracy of working condition recognition at lower computational cost. To the best of our knowledge, this is the first work to discuss the effect of data heterogeneity on the working condition recognition performance of SRPS.
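As a rough illustration of the final classification stage only, the sketch below builds a small CNN that maps a feature matrix to working-condition classes. The 4S-TFSM construction itself is not reproduced; the 4×64 input size, the 8 classes, and the layer sizes are all hypothetical placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Hypothetical input: one 4x64 time-frequency signature matrix per well sample.
class ConditionCNN(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),                       # pool only along the long axis
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 8)),               # fixed-size summary of the map
        )
        self.classifier = nn.Linear(32 * 4 * 8, n_classes)

    def forward(self, x):                               # x: (batch, 1, 4, 64)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = ConditionCNN()
logits = model(torch.randn(2, 1, 4, 64))                # two dummy samples -> (2, 8) logits
```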
The conventional linear time-frequency analysis method cannot achieve high resolution and energy focusing in the time and frequency dimensions at the same time, especially in the low-frequency region. To improve the resolution of linear time-frequency analysis in the low-frequency region, we have proposed the W transform, in which the instantaneous frequency is introduced as a parameter into the linear transformation and the analysis time window is constructed to match the instantaneous frequency of the seismic data. In this paper, the W transform is compared with the Wigner-Ville distribution (WVD), a typical nonlinear time-frequency analysis method. The WVD, which shows the energy distribution in the time-frequency domain, clearly indicates the gravitational center of time and the gravitational center of frequency of a wavelet; the time-frequency spectrum of the W transform also has a clear gravitational center of energy focusing, because the instantaneous frequency corresponding to each time position is introduced as the transformation parameter. Therefore, the W transform can be benchmarked directly against the WVD. We summarize the development of the W transform and three improved methods in recent years, and elaborate on the evolution of the standard W transform, the chirp-modulated W transform, the fractional-order W transform, and the linear canonical W transform. Three application examples of the W transform in fluvial sand body identification and reservoir prediction verify that the W transform can improve the resolution and energy focusing of time-frequency spectra.
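The W transform itself is not reproduced here, but the WVD benchmark it is compared against can be sketched directly. The following is a minimal discrete pseudo-WVD, assuming a windowed lag range and leaving normalization aside; `half_window` is an illustrative parameter, not taken from the paper.

```python
import numpy as np

def pseudo_wvd(x, half_window=64):
    """Discrete pseudo-Wigner-Ville distribution (illustrative sketch).

    Returns a (2*half_window, len(x)) array; bin m corresponds roughly to
    frequency m*fs/(4*half_window). For real-valued data, feed the analytic
    signal (e.g. scipy.signal.hilbert) to avoid the usual WVD aliasing.
    """
    x = np.asarray(x, dtype=complex)
    N, L = len(x), half_window
    W = np.zeros((2 * L, N))
    for n in range(N):
        r = np.zeros(2 * L, dtype=complex)
        kmax = min(L - 1, n, N - 1 - n)
        k = np.arange(0, kmax + 1)
        vals = x[n + k] * np.conj(x[n - k])   # instantaneous autocorrelation r_n(k)
        r[k] = vals                            # non-negative lags
        r[-k[1:]] = np.conj(vals[1:])          # negative lags in FFT ordering
        W[:, n] = np.fft.fft(r).real           # conjugate symmetry makes this real
    return W
```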
The Gabor and S transforms are frequently used in time-frequency decomposition methods. Constrained by the uncertainty principle, both transforms produce low-resolution time-frequency decomposition results in the time and frequency domains. To improve the resolution of the time-frequency decomposition results, we use the instantaneous frequency distribution function (IFDF) to express the seismic signal. When the instantaneous frequencies of the nonstationary signal satisfy the requirements of the uncertainty principle, the support of the IFDF is exactly the support of the amplitude ridges in the signal obtained using the short-time Fourier transform. Based on this feature, we propose a new iterative algorithm to achieve a sparse time-frequency decomposition of the signal. The algorithm uses the support of the amplitude ridges of the residual signal obtained with the short-time Fourier transform to update the time-frequency components of the signal. The summation of the updated time-frequency components in each iteration is the result of the sparse time-frequency decomposition. Numerical examples show that the proposed method improves the resolution of the time-frequency decomposition results and the accuracy of the analysis of nonstationary signals. We also use the proposed method to attenuate the ground roll of field seismic data with good results.
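A minimal sketch of the ridge-driven iteration described above: at each pass, take the STFT of the residual, keep only the amplitude ridge, invert it to obtain one component, and subtract. The window length, number of components, and the one-ridge-per-frame rule are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

def sparse_tf_decompose(x, fs, n_components=3, nperseg=128):
    """Peel off the dominant STFT amplitude ridge of the residual, repeatedly."""
    residual = np.asarray(x, dtype=float)
    components = []
    for _ in range(n_components):
        f, t, Z = stft(residual, fs=fs, nperseg=nperseg)
        ridge = np.argmax(np.abs(Z), axis=0)            # strongest bin per time frame
        mask = np.zeros_like(Z)
        cols = np.arange(Z.shape[1])
        mask[ridge, cols] = Z[ridge, cols]               # keep only the ridge values
        _, comp = istft(mask, fs=fs, nperseg=nperseg)    # back to the time domain
        comp = comp[:len(residual)]
        components.append(comp)
        residual = residual - comp                       # update the residual
    return components, residual
```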
This paper deals with the blind separation of nonstationary sources and direction-of-arrival (DOA) estimation in the underdetermined case, when there are more sources than sensors. We assume the sources to be time-frequency (TF) disjoint to a certain extent. In particular, the number of sources present in any TF neighborhood is strictly less than the number of sensors. We can identify the real number of active sources and achieve separation in any TF neighborhood by the sparse representation method. Compared with the subspace-based algorithm under the same sparseness assumption, which suffers from extra noise effects since it cannot estimate the true number of active sources, the proposed algorithm estimates the number of active sources and their corresponding TF values in any TF neighborhood simultaneously. Another contribution of this paper is a new estimation procedure for the DOA of sources in the underdetermined case, which combines the TF sparseness of the sources with a clustering technique. Simulation results demonstrate the validity and high performance of the proposed algorithm in both blind source separation (BSS) and DOA estimation.
This paper proposes a new method for estimating the parameters of maneuvering targets based on a sparse time-frequency transform in over-the-horizon radar (OTHR). In this method, the sparse time-frequency distribution of the radar echo is obtained by solving a sparse optimization problem based on the short-time Fourier transform. The Hough transform is then employed to estimate the target parameters. The proposed algorithm has the following advantages: compared with the Wigner-Hough transform method, the computational complexity of the sparse optimization is low due to the use of the fast Fourier transform (FFT), and the computational cost of the Hough transform is also greatly reduced because of the sparsity of the time-frequency distribution. Compared with the high-order ambiguity function (HAF) method, the proposed method improves precision and robustness to noise. Simulation results show that, compared with the HAF method, the required SNR and the relative mean square error of the proposed method are 8 dB and 50 dB lower, respectively. When processing field experiment data, the execution time of the Hough transform in the proposed method is only 4% of that of the Wigner-Hough transform method.
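The sketch below illustrates the general idea of accumulating sparse time-frequency points along candidate lines f = f0 + k·t to recover a chirp rate; a simple STFT threshold stands in for the paper's sparse optimization, and the grids, threshold, and test signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def hough_chirp_estimate(x, fs, k_grid, f0_bins=128, nperseg=256, keep=0.01):
    """Accumulate the strongest STFT points along candidate lines f = f0 + k*t."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    thr = np.quantile(mag, 1 - keep)               # keep only the strongest TF points
    fi, ti = np.nonzero(mag >= thr)
    f0_edges = np.linspace(f.min(), f.max(), f0_bins + 1)
    acc = np.zeros((len(k_grid), f0_bins))
    for j, k in enumerate(k_grid):
        f0 = f[fi] - k * t[ti]                     # intercept implied by each point
        hist, _ = np.histogram(f0, bins=f0_edges, weights=mag[fi, ti])
        acc[j] += hist
    jk, jf = np.unravel_index(np.argmax(acc), acc.shape)
    return k_grid[jk], 0.5 * (f0_edges[jf] + f0_edges[jf + 1])

# Toy example: a chirp starting near 10 Hz with rate about 40 Hz/s.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (10 * t + 0.5 * 40 * t ** 2))
k_hat, f0_hat = hough_chirp_estimate(x, fs, k_grid=np.linspace(0, 80, 81))
```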
Modern agricultural mechanization has put forward higher requirements for intelligent defect diagnosis. However, fault features are usually learned and classified over all speeds without considering the effects of speed fluctuation. To overcome this deficiency, a novel intelligent defect detection framework based on time-frequency transformation is presented in this work. In the framework, the samples under one speed are employed to train a sparse filtering model, and the remaining samples under different speeds are adopted to test its effectiveness. The proposed approach contains two stages: 1) time-frequency domain signals are acquired from the raw mechanical vibration data by the short-time Fourier transform, and defect features are then extracted from these signals by the sparse filtering algorithm; 2) different defect types are classified by softmax regression using the defect features. The proposed approach can mine available fault characteristics adaptively and is an effective intelligent method for fault detection of agricultural equipment. The fault detection results confirm that our approach not only has a strong ability for fault classification under different speeds, but also obtains higher identification accuracy than the other methods.
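For orientation only, the block below writes down the standard sparse filtering objective (soft absolute value of the learned features, followed by row-wise and column-wise l2 normalization, then an l1 sum) and optimizes it with a generic finite-difference L-BFGS call. The data shapes, feature count, and optimizer settings are placeholders; the learned features would then feed a softmax classifier as described above.

```python
import numpy as np
from scipy.optimize import minimize

def sparse_filtering_objective(w_flat, X, n_features, eps=1e-8):
    """Sparse filtering cost: l1 norm of doubly l2-normalized features F = |W X|."""
    W = w_flat.reshape(n_features, X.shape[0])
    F = np.sqrt((W @ X) ** 2 + eps)                                  # soft absolute value
    F = F / np.sqrt((F ** 2).sum(axis=1, keepdims=True) + eps)       # normalize each feature row
    F = F / np.sqrt((F ** 2).sum(axis=0, keepdims=True) + eps)       # normalize each sample column
    return F.sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 200))          # hypothetical STFT-derived inputs: dims x samples
n_features = 10
w0 = rng.standard_normal(n_features * X.shape[0])
res = minimize(sparse_filtering_objective, w0, args=(X, n_features),
               method="L-BFGS-B", options={"maxiter": 50})
W_learned = res.x.reshape(n_features, X.shape[0])                    # feature extractor weights
```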
Since leaks in high-pressure pipelines transporting crude oil can cause severe economic losses, a reliable leak risk assessment can assist in developing an effective pipeline maintenance plan and avoiding unexpected incidents. Fast and accurate leak detection methods are essential for maintaining pipeline safety in pipeline reliability engineering. Current oil pipeline leakage signals are insufficient for feature extraction, and the training time of traditional leakage prediction models is too long. A new leak detection method is proposed based on time-frequency features and a Genetic Algorithm-Levenberg Marquardt (GA-LM) classification model for predicting the leakage status of oil pipelines. The processed signal is transformed to the time and frequency domains, allowing full expression of the original signal. The traditional back propagation (BP) neural network is optimized by the Genetic Algorithm (GA) and Levenberg-Marquardt (LM) algorithms. The results show that the recognition performance of a combined feature parameter is superior to that of a single feature parameter. The accuracy, precision, recall, and F1-score of the GA-LM model are 95%, 93.5%, 96.7%, and 95.1%, respectively, which proves that the GA-LM model has good predictive performance and excellent stability for positive and negative samples. The proposed GA-LM model can markedly reduce training time and improve recognition efficiency. In addition, considering that a large number of samples are required for model training, a wavelet threshold method is proposed to generate sample data with higher reliability. The research results can provide an effective theoretical and technical reference for the leakage risk assessment of actual oil pipelines.
The load types in low-voltage distribution systems are diverse. Some loads have current signals that are similar to series fault arcs, making it difficult to detect fault arcs effectively during their occurrence and sustained combustion, which can easily lead to serious electrical fire accidents. To address this issue, this paper establishes a fault arc prototype experimental platform, selects multiple commonly used loads for fault arc experiments, and collects data in both normal and fault states. By analyzing waveform characteristics and selecting fault discrimination feature indicators, corresponding feature values are extracted for qualitative analysis to explore changes in the time-frequency characteristics of the current before and after faults. Multiple features are then selected to form a multidimensional feature vector space to effectively reduce arc misjudgments and construct a fault discrimination feature database. Based on this, a fault arc hazard prediction model is built using random forests. The model's multiple hyperparameters are optimized simultaneously through grid search, aiming to minimize node information entropy, and the model is trained to completion, thereby enhancing its robustness and generalization ability. Experimental verification shows that the proposed method accurately predicts and classifies fault arcs of different load types, with an average accuracy at least 1% higher than that of the commonly used fault prediction methods compared in this paper.
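A minimal sketch of the modeling step described above: a random forest with the entropy splitting criterion, tuned by grid search. The feature matrix, labels, and hyperparameter grid are placeholders, not the paper's feature database or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 12))       # hypothetical multidimensional feature vectors
y = rng.integers(0, 2, size=600)         # placeholder labels: 1 = fault arc, 0 = normal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
grid = GridSearchCV(
    RandomForestClassifier(criterion="entropy", random_state=0),  # entropy-based splits
    param_grid={"n_estimators": [100, 300],
                "max_depth": [None, 10, 20],
                "min_samples_leaf": [1, 3]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```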
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
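As a toy illustration of the itemset-mining idea (not the TELSO algorithm itself), the sketch below counts which non-zero positions, and which pairs of positions, co-occur among the best-scoring masks and assembles a new mask from them. All thresholds, the single-objective scoring, and the variable names are illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from collections import Counter

def frequent_mask(masks, scores, top_k=20, pair_support=0.4):
    """Mine positions (and pairs of positions) frequent among the best masks."""
    elite = masks[np.argsort(scores)[:top_k]]           # assume smaller score = better
    support = elite.mean(axis=0)                        # per-variable frequency
    pair_counts = Counter()
    for m in elite:
        nz = np.flatnonzero(m)
        pair_counts.update(combinations(nz.tolist(), 2))
    frequent_pairs = {p for p, c in pair_counts.items() if c / top_k >= pair_support}
    new_mask = (support >= 0.5).astype(int)             # start from frequent singletons
    for i, j in frequent_pairs:                         # switch frequent pairs on together
        new_mask[i] = new_mask[j] = 1
    return new_mask

rng = np.random.default_rng(1)
masks = (rng.random((100, 50)) < 0.1).astype(int)       # hypothetical particle masks
scores = rng.random(100)                                # hypothetical objective values
mask = frequent_mask(masks, scores)
```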
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of the problem implies a high-dimensional decision space, requiring algorithms to traverse vast expanses with limited computational resources. Furthermore, in the sparse setting, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This substantial enhancement dramatically improves the efficacy of frequent pattern mining. In this way, candidate sets are no longer selected based on the number of nonzero variables they contain but on a higher proportion of nonzero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining, which delves into the intricate relationships between non-zero variables. This methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
In this paper, we reconstruct strongly-decaying block sparse signals by the block generalized orthogonal matching pursuit (BgOMP) algorithm in the l2-bounded noise case. Under some restraints on the minimum magnitude of the nonzero elements of the strongly-decaying block sparse signal, if the sensing matrix satisfies the block restricted isometry property (block-RIP), then arbitrary strongly-decaying block sparse signals can be accurately and stably reconstructed by the BgOMP algorithm within a finite number of iterations. Furthermore, we conjecture that this condition is sharp.
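For readers unfamiliar with block greedy pursuit, here is a minimal block OMP sketch: at each iteration it selects the block of columns most correlated with the residual and refits by least squares. The generalized variant (BgOMP) selects several blocks per iteration; this sketch picks one, and the problem sizes are arbitrary.

```python
import numpy as np

def block_omp(A, y, block_size, max_iter):
    """Block OMP sketch for y = A x + noise with block-sparse x (equal block sizes)."""
    m, n = A.shape
    blocks = [np.arange(b * block_size, (b + 1) * block_size)
              for b in range(n // block_size)]
    support, residual = [], y.copy()
    for _ in range(max_iter):
        corr = [np.linalg.norm(A[:, idx].T @ residual) for idx in blocks]
        best = int(np.argmax(corr))                      # block most correlated with residual
        if best not in support:
            support.append(best)
        cols = np.concatenate([blocks[b] for b in support])
        x_s, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)   # refit on chosen blocks
        residual = y - A[:, cols] @ x_s
    x = np.zeros(n)
    x[cols] = x_s
    return x

# Tiny example: 4 blocks of size 5, two active blocks with decaying magnitudes.
rng = np.random.default_rng(0)
A = rng.standard_normal((15, 20))
x_true = np.zeros(20); x_true[0:5] = 3.0; x_true[10:15] = 1.5
y = A @ x_true + 0.01 * rng.standard_normal(15)
x_hat = block_omp(A, y, block_size=5, max_iter=2)
```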
Although many multi-view clustering (MVC) algorithms with acceptable performance have been presented, to the best of our knowledge, nearly all of them need to be fed the correct number of clusters. In addition, these existing algorithms create only hard and fuzzy partitions for multi-view objects, which are often located in highly overlapping areas of the multi-view feature space. The adoption of hard and fuzzy partitions ignores the ambiguity and uncertainty in the assignment of objects, likely leading to performance degradation. To address these issues, we propose a novel sparse reconstructive multi-view evidential clustering algorithm (SRMVEC). Based on a sparse reconstructive procedure, SRMVEC learns a shared affinity matrix across views and maps multi-view objects to a 2-dimensional human-readable chart by calculating two newly defined mathematical metrics for each object. From this chart, users can detect the number of clusters and select several objects in the dataset as cluster centers. Then, SRMVEC derives a credal partition under the framework of evidence theory, improving the fault tolerance of clustering. Ablation studies show the benefits of adopting the sparse reconstructive procedure and evidence theory. Besides, SRMVEC delivers effectiveness on benchmark datasets by outperforming some state-of-the-art methods.
Quantized training has proven to be a prominent method for training deep neural networks under limited computational resources. It uses low bit-width arithmetic with a proper scaling factor to achieve negligible accuracy loss. Cambricon-Q is an ASIC design proposed to efficiently support quantized training and achieves significant performance improvement. However, there are still two caveats in the design. First, Cambricon-Q with different hardware specifications may lead to different numerical errors, resulting in non-reproducible behaviors that may become a major concern in critical applications. Second, Cambricon-Q cannot leverage data sparsity, from which considerable cycles could still be squeezed out. To address these caveats, the acceleration core of Cambricon-Q is redesigned to support fine-grained irregular data processing. The new design not only enables acceleration on sparse data, but also performs local dynamic quantization over contiguous value ranges (which is hardware independent) instead of contiguous addresses (which depend on hardware factors). Experimental results show that the accuracy loss of the method remains negligible, and the accelerator achieves a 1.61× performance improvement over Cambricon-Q, with about a 10% energy increase.
Passive detection of low-slow-small (LSS) targets is easily interfered with by the direct signal and multipath clutter, and traditional clutter suppression methods face a trade-off between step size and convergence rate. In this paper, a frequency-domain clutter suppression algorithm based on sparse adaptive filtering is proposed. The pulse compression operation between the error signal and the input reference signal is added to the cost function as a sparsity constraint, and the criterion for filter weight updating is improved to obtain a purer echo signal. At the same time, the step size and penalty factor are brought into the adaptive iteration process, and the input data are used to drive the adaptive changes of parameters such as the step size. The proposed algorithm requires little computation, improves robustness to parameters such as the step size, reduces the weight error of the filter, and has good clutter suppression performance.
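The paper's frequency-domain, pulse-compression-constrained update is not reproduced here; as a simpler stand-in for sparsity-aware adaptive clutter cancellation, the sketch below implements a zero-attracting normalized LMS filter that subtracts reference-channel clutter from the surveillance channel. Filter order, step size, and the l1 attraction strength are illustrative.

```python
import numpy as np

def za_nlms_cancel(surv, ref, order=32, mu=0.5, rho=1e-4, eps=1e-6):
    """Zero-attracting NLMS: cancel direct-path/multipath clutter in `surv`
    using the reference channel `ref`; the l1 term pulls small taps to zero."""
    N = len(surv)
    w = np.zeros(order)
    echo = np.zeros(N)
    for n in range(order, N):
        u = ref[n - order:n][::-1]              # most recent reference samples
        e = surv[n] - w @ u                     # clutter-cancelled output sample
        w = w + mu * e * u / (u @ u + eps)      # normalized LMS update
        w = w - rho * np.sign(w)                # zero-attracting (sparsity) term
        echo[n] = e
    return echo, w
```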
Signal decomposition and multiscale signal analysis provide many useful tools for time-frequency analysis. We propose a random feature method for analyzing time-series data by constructing a sparse approximation to the spectrogram. The randomization is in both the time window locations and the frequency sampling, which lowers the overall sampling and computational cost. The sparsification of the spectrogram leads to a sharp separation between time-frequency clusters, which makes it easier to identify intrinsic modes and thus leads to a new data-driven mode decomposition. Applications include signal representation, outlier removal, and mode decomposition. On benchmark tests, we show that our approach outperforms other state-of-the-art decomposition methods.
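A rough sketch of the idea under stated assumptions: build a dictionary of Gabor-like atoms with random window centers and random frequencies, then fit the signal with an l1-penalized regression so that only a few time-frequency atoms survive. The window width, atom count, and regularization strength are placeholders, and Lasso stands in for whatever sparse solver the authors use.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
fs, T = 256.0, 4.0
t = np.arange(0, T, 1 / fs)
x = (np.cos(2 * np.pi * 12 * t) + 0.5 * np.cos(2 * np.pi * 40 * t)
     + 0.05 * rng.standard_normal(t.size))          # two tones plus noise

# Random Gabor-like dictionary: random window centers and random frequencies.
n_atoms, width = 400, 0.25
centers = rng.uniform(0, T, n_atoms)
freqs = rng.uniform(0, fs / 2, n_atoms)
win = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)
D = np.hstack([win * np.cos(2 * np.pi * freqs * t[:, None]),
               win * np.sin(2 * np.pi * freqs * t[:, None])])

coef = Lasso(alpha=0.01, max_iter=5000).fit(D, x).coef_   # sparse spectrogram weights
active = np.flatnonzero(np.abs(coef) > 1e-3)              # surviving time-frequency atoms
```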
Designing a sparse array with fewer transmit/receive modules (TRMs) is vital for applications where the antenna system's size, weight, allowed operating space, and cost are limited. Sparse arrays exhibit distinct architectures, roughly classified into three categories: thinned arrays, nonuniformly spaced arrays, and clustered arrays. While numerous advanced synthesis methods have been presented for the three types of sparse arrays in recent years, a comprehensive review of the latest developments in sparse array synthesis has been lacking. This work aims to fill this gap by thoroughly summarizing these techniques. The study includes synthesis examples to facilitate a comparative analysis of different techniques in terms of both accuracy and efficiency. Thus, this review is intended to assist researchers and engineers in related fields, offering a clear understanding of the development of and distinctions among sparse array synthesis techniques.
In practice, simultaneous impact localization and time history reconstruction can hardly be achieved, due to the ill-posed and under-determined problems induced by constrained and harsh measuring conditions. Although l1 regularization can be used to obtain sparse solutions, it tends to underestimate solution amplitudes as a biased estimator. To address this issue, a novel impact force identification method with lp regularization is proposed in this paper, using the alternating direction method of multipliers (ADMM). By decomposing the complex primal problem into sub-problems solvable in parallel via proximal operators, ADMM can address the challenge effectively. To mitigate the sensitivity to regularization parameters, an adaptive regularization parameter is derived based on the K-sparsity strategy. Then, an ADMM-based sparse regularization method is developed that is capable of handling lp regularization with arbitrary p values using adaptively updated parameters. The effectiveness and performance of the proposed method are validated on an aircraft skin-like composite structure. Additionally, an investigation into the optimal p value for achieving high-accuracy solutions via lp regularization is conducted. It turns out that l0.6 regularization consistently yields sparser and more accurate solutions for impact force identification than the classic l1 regularization method. The impact force identification method proposed in this paper can simultaneously reconstruct the impact time history with high accuracy and accurately localize the impact using an under-determined sensor configuration.
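To make the ADMM splitting concrete, here is a minimal sketch for the l1 case (p = 1), where the proximal step is the soft threshold; for p < 1, the same loop applies with the corresponding lp proximal operator in place of the threshold. The transfer matrix `A`, measurement `b`, and the penalty values are hypothetical, and no adaptive parameter update is included.

```python
import numpy as np

def admm_sparse_force(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min 0.5*||A f - b||^2 + lam*||f||_1 (swap the threshold for an
    l_p proximal operator to handle p < 1)."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse every iteration
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        f = np.linalg.solve(L.T, np.linalg.solve(L, rhs))          # quadratic sub-problem
        v = f + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)    # prox of lam*||.||_1
        u = u + f - z                                              # dual update
    return z

# Hypothetical under-determined setup: 60 sensor samples, 200 force unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
f_true = np.zeros(200); f_true[[30, 31, 32]] = [2.0, 5.0, 1.0]     # short impact pulse
b = A @ f_true + 0.01 * rng.standard_normal(60)
f_hat = admm_sparse_force(A, b)
```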
To address the seismic face stability challenges encountered in urban and subsea tunnel construction, an efficient probabilistic analysis framework for shield tunnel faces under seismic conditions is proposed. Based on the upper-bound theory of limit analysis, an improved three-dimensional discrete deterministic mechanism, accounting for the heterogeneous nature of soil media, is formulated to evaluate seismic face stability. The metamodel for probabilistic assessment of seismic tunnel face failure is constructed by integrating the sparse polynomial chaos expansion method (SPCE) with the modified pseudo-dynamic approach (MPD). The improved deterministic model is validated by comparison with published literature and numerical simulation results, and the SPCE-MPD metamodel is examined against the traditional MCS method. Based on the SPCE-MPD metamodels, the seismic effects on face failure probability and reliability index are presented, and a global sensitivity analysis (GSA) is performed to rank the influence of the seismic action parameters. Finally, the proposed approach is shown to be effective in an engineering case of the Chengdu outer ring tunnel. The results show that the higher uncertainty of the seismic response on face stability should be noted in areas with intense earthquakes, and that variation of the seismic wave velocity has the most profound influence on tunnel face stability.
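For context on the quantities reported above (failure probability and reliability index), here is a crude Monte-Carlo stand-in for the SPCE-MPD metamodel: sample random soil and seismic inputs, evaluate a limit-state function g (g ≤ 0 meaning face failure), and convert the failure probability to a reliability index. The input distributions and the limit-state expression are purely illustrative, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

def face_failure_probability(limit_state, n_samples=100_000, seed=0):
    """Monte-Carlo estimate of P_f = P(g <= 0) and the reliability index beta."""
    rng = np.random.default_rng(seed)
    # Hypothetical inputs: cohesion c (kPa), friction angle phi (deg), seismic coefficient kh.
    c = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n_samples)
    phi = rng.normal(30.0, 3.0, size=n_samples)
    kh = rng.uniform(0.0, 0.3, size=n_samples)
    g = limit_state(c, phi, kh)
    p_f = np.mean(g <= 0.0)
    beta = -norm.ppf(p_f) if 0.0 < p_f < 1.0 else float("nan")
    return p_f, beta

# Purely illustrative limit state: resistance grows with c and phi, demand grows with kh.
g_demo = lambda c, phi, kh: 0.6 * c + 2.0 * np.tan(np.radians(phi)) * 10.0 - 25.0 * (1.0 + kh)
p_f, beta = face_failure_probability(g_demo)
```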
This study introduces a pre-orthogonal adaptive Fourier decomposition (POAFD) to obtain approximations and numerical solutions to the fractional Laplacian initial value problem and the extension problem of Caffarelli and Silvestre (generalized Poisson equation). As a first step, the method expands the initial data function into a sparse series of the fundamental solutions with fast convergence, and, as a second step, makes use of the semigroup or the reproducing kernel property of each of the expanding entries. Experiments show the effectiveness and efficiency of the proposed series solutions.
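For reference, the extension problem of Caffarelli and Silvestre mentioned above can be stated as follows (s ∈ (0, 1), and C_s is a constant depending only on s and the dimension):

```latex
% Caffarelli–Silvestre extension: the fractional Laplacian as a Dirichlet-to-Neumann map.
\begin{aligned}
  \nabla \cdot \bigl( y^{\,1-2s}\, \nabla u(x,y) \bigr) &= 0, && x \in \mathbb{R}^{n},\; y > 0,\\
  u(x,0) &= f(x), && x \in \mathbb{R}^{n},
\end{aligned}
\qquad
(-\Delta)^{s} f(x) = -\,C_{s} \lim_{y \to 0^{+}} y^{\,1-2s}\, \partial_{y} u(x,y).
```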