Funding: Sponsored by the Fundamental Research Funds for the Central Universities (Grant No. HIT.NSRIF.2016107) and the China Postdoctoral Science Foundation (Grant No. 2015M581447).
Abstract: With the development of process technology, model order reduction (MOR) has been regarded as a useful tool in the analysis of on-chip interconnects. We propose a weighted self-adaptive threshold wavelet interpolation MOR method based on Krylov subspace techniques. The interpolation points are selected dynamically by a Haar wavelet using a weighted self-adaptive threshold method. Analyses of different types of very large scale integration (VLSI) circuits show that, within the frequency range of interest, the proposed method is more accurate and efficient than the Krylov subspace method with multi-shift expansion points using a Haar wavelet without the weighted self-adaptive threshold, and more accurate than the Krylov subspace method with multi-shift expansion points based on uniform interpolation points.
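The abstract gives no implementation details, but the general idea of picking expansion points where a Haar wavelet analysis of a sampled frequency response flags activity can be sketched as follows. This is a minimal illustration only, not the authors' algorithm: the toy response H, the per-level weights, and the threshold rule (a weighted multiple of the median absolute coefficient per level) are all assumptions made here.

```python
# Hypothetical sketch: choose interpolation (expansion) points where the Haar
# detail coefficients of a sampled frequency response exceed a weighted,
# level-dependent threshold.  Names and the threshold rule are illustrative only.
import numpy as np
import pywt

def select_interpolation_points(freqs, response, max_level=4, level_weights=None):
    """Return a subset of `freqs` flagged as interpolation points."""
    coeffs = pywt.wavedec(np.abs(response), "haar", level=max_level)
    details = coeffs[1:]                        # coarsest detail level first
    if level_weights is None:
        level_weights = np.linspace(1.5, 0.5, len(details))  # assumed weighting
    selected = set()
    for d, w in zip(details, level_weights):
        step = len(freqs) // len(d)             # samples covered by one coefficient
        thr = w * np.median(np.abs(d)) + 1e-12  # weighted "self-adaptive" threshold
        for k in np.nonzero(np.abs(d) > thr)[0]:
            selected.add(min(k * step, len(freqs) - 1))
    return np.sort(freqs[sorted(selected)])

# Toy usage: a two-pole response sampled on a logarithmic frequency grid.
f = np.logspace(6, 10, 256)
s = 2j * np.pi * f
H = 1.0 / ((1 + s / 1e8) * (1 + s / 5e9))
print(select_interpolation_points(f, H)[:10])
```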
Abstract: We consider n observations from the GARCH-type model Z = UY, where U and Y are independent random variables. We aim to estimate the density function of Y, where Y has a weighted distribution. We determine a sharp upper bound on the associated mean integrated squared error. We also make use of the measure of expected true evidence to determine when the model leads to a crisis and causes data to be lost.
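For context, the error criterion named above is the mean integrated squared error, and wavelet density estimators are typically built from the generic linear form in the second display. Note that the paper's actual estimator for the multiplicative model Z = UY (where Y is observed only through Z) is not reproduced here; this is only the standard direct-observation form.

$$\mathrm{MISE}(\hat f,f)=\mathbb{E}\int\big(\hat f(y)-f(y)\big)^{2}\,dy,$$

$$\hat f_{j}(y)=\sum_{k\in\mathbb{Z}}\hat\alpha_{j,k}\,\phi_{j,k}(y),\qquad \hat\alpha_{j,k}=\frac{1}{n}\sum_{i=1}^{n}\phi_{j,k}(Y_{i}),\qquad \phi_{j,k}(y)=2^{j/2}\phi(2^{j}y-k).$$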
Funding: The project was supported by the NSF of China and the Foundation of the Advanced Research Center of Zhongshan University.
Abstract: In this paper we use wavelets to characterize weighted Triebel-Lizorkin spaces. Our weights belong to the Muckenhoupt class A_q, and our weighted Triebel-Lizorkin spaces are weighted atomic Triebel-Lizorkin spaces.
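For readers who want the objects spelled out, the standard definitions are recalled below (with the usual conventions; the paper's normalizations may differ). A weight w belongs to the Muckenhoupt class A_q, 1 < q < ∞, when

$$\sup_{Q\ \text{cube}}\left(\frac{1}{|Q|}\int_{Q}w\,dx\right)\left(\frac{1}{|Q|}\int_{Q}w^{-1/(q-1)}\,dx\right)^{q-1}<\infty,$$

and the weighted (homogeneous) Triebel-Lizorkin quasi-norm is

$$\|f\|_{\dot F^{\alpha}_{p,q}(w)}=\Big\|\Big(\sum_{j\in\mathbb{Z}}\big(2^{j\alpha}\,|\varphi_{j}\ast f|\big)^{q}\Big)^{1/q}\Big\|_{L^{p}(w)},\qquad \varphi_{j}(x)=2^{jn}\varphi(2^{j}x),$$

where φ is a suitable Littlewood-Paley function.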
Funding: Project (61301095) supported by the National Natural Science Foundation of China; Project (QC2012C070) supported by the Heilongjiang Provincial Natural Science Foundation for the Youth, China; Projects (HEUCF130807, HEUCFZ1129) supported by the Fundamental Research Funds for the Central Universities of China.
Abstract: In the modern electromagnetic environment, radar emitter signal recognition is an important research topic. On the basis of multi-resolution wavelet analysis, an adaptive radar emitter signal recognition method based on multi-scale wavelet entropy feature extraction and feature weighting is proposed. With the signal-to-noise ratio (SNR) as the only prior knowledge, multi-scale wavelet entropy features are extracted from the wavelet coefficients of different received signals, and the uneven weight factor and stability weight factor of the extracted multi-dimensional features are calculated. Radar emitter signals of different modulation types and different modulation parameters are recognized through feature weighting and feature fusion. Theoretical analysis and simulation results show that the presented algorithm has a high recognition rate; when the SNR is greater than -4 dB, the correct recognition rate is higher than 93%. Hence, the proposed algorithm has great application value.
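One common way to form a multi-scale wavelet entropy feature vector (not necessarily the authors' exact definition) is to take, at each decomposition level, the Shannon entropy of the normalized energy distribution of the coefficients at that level. A minimal sketch, with the wavelet choice and entropy definition as assumptions:

```python
# Hypothetical sketch of multi-scale wavelet entropy features.
import numpy as np
import pywt

def multiscale_wavelet_entropy(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    features = []
    for c in coeffs[1:]:                    # detail coefficients, coarse to fine
        e = c ** 2
        p = e / (e.sum() + 1e-12)           # energy distribution within the level
        features.append(-np.sum(p * np.log(p + 1e-12)))
    return np.asarray(features)

# Toy usage: entropy vector of a noisy chirp-like signal.
t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * (20 + 80 * t) * t) + 0.3 * np.random.randn(t.size)
print(multiscale_wavelet_entropy(x))
```

Feature weighting would then multiply each entropy component by the weight factors described in the abstract before fusion and classification.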
Funding: The National Natural Science Foundation of China (No. 50409008).
Abstract: Multiresolution analysis in wavelet theory provides an effective way to describe information at various levels of approximation, i.e., at different resolutions. Based on spline wavelet analysis, the weight function is orthogonally projected onto a sequence of closed spline subspaces and is viewed at various levels of approximation. A useful new way to study the weight function is thus obtained, and numerical results are given.
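Concretely, with an orthonormalized spline basis {φ_{j,k}} of the j-th approximation space V_j (an assumption about normalization; the abstract does not fix it), the projection of a weight function w ∈ L^2(ℝ) takes the standard form

$$P_{j}w=\sum_{k\in\mathbb{Z}}\langle w,\phi_{j,k}\rangle\,\phi_{j,k},\qquad \|P_{j}w-w\|_{L^{2}}\to 0\ \ (j\to\infty),$$

so each P_j w is the view of w at the j-th resolution level.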
Funding: Funded by the National Key Research and Development Plan (No. 2017YFB0202905) and the China Petroleum Corporation Technology Management Department project "Deep-ultra-deep weak signal enhancement technology based on seismic physical simulation experiments" (No. 2017-5307073-000008-01).
Abstract: Seismic wave velocity is one of the most important processing parameters of seismic data, and it also determines the accuracy of imaging. The conventional method of velocity analysis involves scanning through a series of equally spaced velocities and producing the velocity spectrum by superposing energy or similarity coefficients. In this method, however, the sensitivity of the semblance spectrum to changes in velocity is weak, so the resolution is poor. In this paper, to address these deficiencies of conventional velocity analysis, a method for obtaining a high-resolution velocity spectrum based on weighted similarity is proposed. By introducing two weighting functions, the resolution of the similarity spectrum in both time and velocity is improved. Numerical examples and real seismic data indicate that the proposed method provides a velocity spectrum with higher resolution than conventional methods and can separate crossing reflectors that are aliased in conventional semblance spectra; at the same time, the method shows good noise resistance.
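For orientation, the conventional semblance spectrum that the paper improves on can be computed as below. The hyperbolic moveout and semblance formula are standard; the paper's two weighting functions are not given in the abstract, so the sharpening exponent `power` here is only an illustrative stand-in.

```python
# Minimal sketch of a semblance-based velocity spectrum with an optional
# (illustrative) sharpening weight.
import numpy as np

def velocity_spectrum(gather, offsets, dt, velocities, win=5, power=1.0):
    """gather: (n_samples, n_traces) CMP gather -> (n_samples, n_velocities) spectrum."""
    n_t, n_x = gather.shape
    t0 = np.arange(n_t) * dt
    spec = np.zeros((n_t, len(velocities)))
    for iv, v in enumerate(velocities):
        # NMO-correct every trace for this trial velocity (nearest-sample pick).
        t_nmo = np.sqrt(t0[:, None] ** 2 + (offsets[None, :] / v) ** 2)
        idx = np.clip(np.round(t_nmo / dt).astype(int), 0, n_t - 1)
        corrected = gather[idx, np.arange(n_x)[None, :]]
        num = np.convolve(corrected.sum(axis=1) ** 2, np.ones(win), "same")
        den = n_x * np.convolve((corrected ** 2).sum(axis=1), np.ones(win), "same")
        spec[:, iv] = (num / (den + 1e-12)) ** power   # power > 1 sharpens the spectrum
    return spec

# Toy usage on a random gather, just to show the shapes involved.
dt, offs = 0.004, np.linspace(0.0, 2000.0, 24)
g = 0.01 * np.random.randn(500, 24)
print(velocity_spectrum(g, offs, dt, np.linspace(1500, 4000, 50)).shape)
```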
Funding: Supported, in part, by the National Natural Science Foundation of China under grant number 62272236; in part, by the Natural Science Foundation of Jiangsu Province under grant numbers BK20201136 and BK20191401; and, in part, by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
Abstract: Robust watermarking requires finding features that are invariant under multiple attacks to ensure correct extraction. Deep learning is extremely powerful at extracting features, and watermarking algorithms based on deep learning have attracted widespread attention. Most existing methods use 3×3 small-kernel convolutions to extract image features and embed the watermark. However, the effective receptive fields of small-kernel convolutions are extremely confined, so the pixels that each watermark can affect are restricted, limiting watermarking performance. To address these problems, we propose a watermarking network based on large-kernel convolution and adaptive weight assignment for the loss functions. It uses large-kernel depth-wise convolution to extract features and learn large-scale image information, and subsequently projects the watermark into a high-dimensional space by 1×1 convolution to achieve adaptability in the channel dimension. As a result, the modification of the cover image by the embedded watermark is extended to more pixels. Because the magnitudes and convergence rates of the individual loss functions differ, an adaptive loss weight assignment strategy is proposed so that the weights participate in network training and are adjusted dynamically. Furthermore, a high-frequency wavelet loss is proposed, by which the watermark is restricted to the low-frequency wavelet sub-bands, thereby enhancing the robustness of the watermark against image compression. The experimental results show that the peak signal-to-noise ratio (PSNR) of the encoded image reaches 40.12, the structural similarity (SSIM) reaches 0.9721, and the watermark has good robustness against various types of noise.
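The abstract does not spell out the adaptive weight assignment. One widely used scheme that matches the description of loss weights being trained together with the network is learned log-variance weighting, sketched below in PyTorch. The loss names and the combination rule are assumptions for illustration, not the paper's exact strategy.

```python
# Hypothetical sketch: loss weights that are themselves trainable parameters
# (learned log-variance weighting).  The individual loss terms are placeholders.
import torch
import torch.nn as nn

class AdaptiveLossWeights(nn.Module):
    def __init__(self, n_losses: int):
        super().__init__()
        # One learnable log-weight per loss term; exp(-s) acts as the weight.
        self.log_vars = nn.Parameter(torch.zeros(n_losses))

    def forward(self, losses):
        total = 0.0
        for s, loss in zip(self.log_vars, losses):
            total = total + torch.exp(-s) * loss + s   # the +s term keeps weights bounded
        return total

# Toy usage with three placeholder losses (e.g. image, message and wavelet losses).
weighter = AdaptiveLossWeights(3)
l_img, l_msg, l_wav = torch.rand(3, requires_grad=True)
total = weighter([l_img, l_msg, l_wav])
total.backward()
```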
Funding: Project supported by the National Natural Science Foundation of China (69875009).
Abstract: We have constructed a compactly supported biorthogonal wavelet that approximates the modulation transfer function (MTF) of the human visual system in the frequency domain. In this paper, we evaluate the performance of the constructed wavelet and compare it with the widely used Daubechies 9/7, Daubechies 9/3 and GBCW 9/7 wavelets. The results show that the coding performance of the constructed wavelet is better than that of Daubechies 9/3 and is competitive with the Daubechies 9/7 and GBCW 9/7 wavelets. Like the Daubechies 9/3 wavelet, the filter coefficients of the constructed wavelet are all dyadic fractions, and it has fewer taps than Daubechies 9/7 and GBCW 9/7. This is an attractive feature for the realization of the discrete wavelet transform.
Abstract: A lot of research has been performed on diagnosing rolling element bearing faults using wavelet analysis, but almost all existing methods are not ideal for picking up fault signal characteristics under strong noise. Therefore, this paper proposes auto-correlation, cross-correlation and weighted-average fault diagnosis methods based on wavelet transform (WT) de-noising, which combine correlation analysis with the WT for the first time. These three methods compute the auto-correlation, cross-correlation and weighted average of the measured vibration signals, de-noise them by thresholding the WT coefficients, and then compute the auto-correlation of the de-noised signal and the FFT of its energy sequence. The simulation results indicate that all three methods enhance the capability of fault diagnosis of rolling bearings and pick up the fault characteristics effectively.
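A minimal sketch of one such pipeline (auto-correlation, wavelet-threshold de-noising, then the FFT of the energy sequence) is given below. The threshold rule (universal threshold with soft shrinkage) and the wavelet choice are assumptions, not the paper's exact settings.

```python
# Hypothetical sketch: correlation + wavelet de-noising + energy-sequence spectrum.
import numpy as np
import pywt

def fault_spectrum(x, fs, wavelet="db8", level=4):
    # 1) Auto-correlation suppresses uncorrelated noise, keeps periodic impacts.
    xc = x - x.mean()
    r = np.correlate(xc, xc, mode="full")[x.size - 1:]
    # 2) Wavelet de-noising by soft-thresholding the detail coefficients.
    coeffs = pywt.wavedec(r, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(r.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    d = pywt.waverec(coeffs, wavelet)[: r.size]
    # 3) Spectrum of the energy (squared) sequence exposes the fault repetition rate.
    energy = d ** 2
    spec = np.abs(np.fft.rfft(energy - energy.mean()))
    freqs = np.fft.rfftfreq(energy.size, 1 / fs)
    return freqs, spec

# Toy usage: impacts repeating at roughly 100 Hz buried in noise.
fs = 12000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 100 * t) > 0.99) + np.random.randn(t.size)
f, s = fault_spectrum(x, fs)
print(f[np.argmax(s[1:]) + 1])   # dominant frequency of the energy spectrum
```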
Funding: Sponsored by the National 985 Project Foundation of China.
Abstract: A new model identification method for a hydraulic flight simulator, adopting improved particle swarm optimization (PSO) and wavelet analysis, is proposed to achieve higher identification precision. The input-output data of the hydraulic flight simulator were decomposed by wavelet multiresolution analysis to obtain information in different frequency bands. The reconstructed input-output data were then used to build the model of the hydraulic flight simulator with improved particle swarm optimization with mutation (IPSOM), which effectively avoids the premature convergence of traditional optimization techniques. Simulation results show that the proposed method is more precise than traditional system identification methods in the operating frequency bands because the design index of the control system is taken into account in the identification.
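To make the optimization idea concrete, here is a minimal particle swarm optimizer with a mutation step. Plain PSO is standard; the specific mutation rule (randomly re-seeding a few particles) is only an assumption illustrating how mutation can counter premature convergence, not the paper's IPSOM.

```python
# Minimal sketch of PSO with mutation on a generic objective.
import numpy as np

def ipsom(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
          w=0.7, c1=1.5, c2=1.5, p_mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        # Mutation: randomly re-seed a few particles to escape premature convergence.
        mask = rng.random(n_particles) < p_mut
        x[mask] = rng.uniform(lo, hi, (mask.sum(), dim))
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage: minimize the sphere function in four dimensions.
print(ipsom(lambda p: np.sum(p ** 2), dim=4))
```

In the identification setting, the objective would be the model-fit error between the wavelet-reconstructed input-output data and the model response.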
Abstract: Linear Discriminant Analysis (LDA) is one of the principal techniques used in face recognition systems. LDA is a well-known scheme for feature extraction and dimension reduction. It provides improved performance over the standard Principal Component Analysis (PCA) method of face recognition by introducing the concept of classes and the distance between classes. This paper provides an overview of PCA, the various variants of LDA and their basic drawbacks. The paper also proposes a development of classical LDA, namely an LDA approach using the wavelet transform, which enhances performance in terms of accuracy and time complexity. Experiments on the ORL face database clearly demonstrate this, and the graphical comparison of the algorithms clearly showcases the improved recognition rate of the proposed algorithm.
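A minimal sketch of a wavelet-plus-LDA pipeline of this kind is shown below: the level-1 approximation sub-band serves as a reduced feature image, which is then classified with LDA. The wavelet choice, the single decomposition level, and the random stand-in data are assumptions; the paper's exact settings are not given in the abstract.

```python
# Hypothetical sketch: wavelet approximation coefficients as features for LDA.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(images):
    """images: (n, h, w) array -> flattened level-1 approximation coefficients."""
    feats = [pywt.dwt2(img, "haar")[0].ravel() for img in images]  # cA sub-band
    return np.stack(feats)

# Toy usage with random "faces" standing in for the ORL database (112x92 pixels).
rng = np.random.default_rng(0)
X_img = rng.random((40, 112, 92))          # 40 images
y = np.repeat(np.arange(10), 4)            # 10 subjects, 4 images each
clf = LinearDiscriminantAnalysis()
clf.fit(wavelet_features(X_img), y)
print(clf.score(wavelet_features(X_img), y))
```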