To improve the performance of composite pseudo-noise (PN) code clock recovery in a regenerative PN ranging system at a low symbol signal-to-noise ratio (SNR), a novel chip tracking loop (CTL) for regenerative PN ranging clock recovery is adopted. The CTL is a modified data transition tracking loop (DTTL); the difference between them is that the Q-channel output of the CTL is multiplied directly by a clock component, while that of the DTTL is multiplied by the I-channel transition detector output. Under the condition of a quasi-square-wave PN ranging code, the tracking (mean-square timing jitter) performance of the CTL is analyzed, and the tracking performances of the CTL and the DTTL are compared over a wide range of symbol SNRs. The results show that the CTL and the DTTL perform identically at a large symbol SNR, while at a low symbol SNR the former offers a noticeable enhancement.
The merging of a panchromatic (PAN) image with a multispectral satellite image (MSI) to increase the spatial resolution of the MSI while preserving its spectral information is classically referred to as PAN-sharpening. We employed a recent dataset derived from the very high resolution WorldView-2 satellite (PAN and MSI) for two test sites (one over an urban area and the other over Antarctica) to comprehensively evaluate the performance of six existing PAN-sharpening algorithms. The algorithms under consideration were Gram-Schmidt (GS), Ehlers fusion (EF), modified hue-intensity-saturation (Mod-HIS), high-pass filtering (HPF), the Brovey transform (BT), and wavelet-based principal component analysis (W-PC). Quality assessment of the sharpened images was carried out using 20 quality indices. We also analyzed the performance of nearest neighbour (NN), bilinear interpolation (BI), and cubic convolution (CC) resampling methods to test their practicability in the PAN-sharpening process. Our results indicate that the comprehensive performance of the PAN-sharpening methods decreased in the following order: GS > W-PC > EF > HPF > Mod-HIS > BT, while the resampling methods followed the order: NN > BI > CC.
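As a concrete illustration of the simplest method in the comparison above, the Brovey transform scales each multispectral band by the ratio of the PAN intensity to the combined multispectral intensity at each pixel. The sketch below is a minimal per-pixel version; the pixel values are illustrative, and the band sum is used as the normalizer (implementations also appear with the band mean):

```python
def brovey(ms_bands, pan):
    """Brovey transform for one pixel: each band is scaled by the ratio of
    the PAN intensity to the summed multispectral intensity."""
    total = sum(ms_bands)
    if total == 0:
        return [0.0] * len(ms_bands)
    return [b * pan / total for b in ms_bands]

# Illustrative pixel: (R, G, B) = (30, 60, 90) with PAN intensity 120.
print(brovey([30.0, 60.0, 90.0], 120.0))  # -> [20.0, 40.0, 60.0]
```

Note that the sharpened bands sum to the PAN value, which is how the spatial detail is injected; the band ratios (and hence hue) are preserved, while absolute radiometry is not, consistent with BT's low spectral ranking above.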
In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample-rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M-data-load time period. We present a load-process architecture (LPA) and a runtime architecture (RA), based on a serial polyphase structure, which differ in scheduling. In LPA, N subfilters are loaded and then M subfilters are processed at a clock rate that is a multiple of the input data rate; this is necessary to meet the output time constraint of the down-sampled data. In RA, the M subfilter processes are efficiently scheduled within the N-data-load time while the N subfilters are simultaneously loaded. This requires reduced clock rates compared with LPA, and potentially less power is consumed. A polyphase filter bank that uses different resampling factors for maximally decimated, under-decimated, over-decimated, and combined up- and down-sampled scenarios is used as a case study, and an analysis of area, time, and power for their FPGA architectures is given. For resource-optimized SDR front ends, RA is superior for reducing operating clock rates and dynamic power consumption. RA is also superior for reducing area resources, except when indices are pre-stored in LUTs.
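The polyphase decimation underlying such channelizers can be sketched in a few lines. The code below is an illustrative M-path software model, not an FPGA architecture: the prototype FIR h is partitioned into M subfilters, each fed one input phase and run at the low output rate, and their outputs are summed.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Filter x with FIR h and keep every Mth output, computed with an
    M-path polyphase partition so each subfilter runs at the output rate."""
    x = np.asarray(x, float)
    h = np.pad(np.asarray(h, float), (0, (-len(h)) % M))
    n_out = int(np.ceil(len(x) / M))
    y = np.zeros(n_out)
    for k in range(M):
        sub = h[k::M]                                  # subfilter e_k[m] = h[mM + k]
        if k == 0:
            xk = x[0::M]                               # input phase 0
        else:
            xk = np.concatenate(([0.0], x[M - k::M]))  # delayed phase: x_k[m] = x[mM - k]
        y += np.convolve(xk, sub)[:n_out]
    return y
```

For example, `polyphase_decimate(x, h, M)` reproduces `np.convolve(x, h)[::M]`; this identity, filtering at the low rate instead of filtering then discarding, is what the LPA/RA scheduling above exploits in hardware.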
In order to address the computational accuracy and efficiency issues of traditional resampling algorithms in rolling element bearing fault diagnosis, an equal division impulse-based (EDI-based) resampling algorithm is proposed. First, the time marks of every rising edge of the rotating-speed pulse and the corresponding amplitudes of the faulty bearing vibration signal are determined. Then, every pair of adjacent rotating-speed pulses is divided equally, and the time marks within each pair and the corresponding vibration amplitudes are obtained by interpolation. Finally, all the time marks and corresponding amplitudes are arranged, and the time marks are transformed into the angle domain to obtain the resampled signal. Speed-up and speed-down faulty bearing signals are employed to verify the validity of the proposed method, and experimental results show that it is effective for diagnosing faulty bearings. Furthermore, traditional order tracking techniques are applied to the same experimental bearing signals, and the comparison shows that the proposed method produces more accurate outcomes in less computation time.
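The angle-domain resampling step described above can be illustrated with linear interpolation. The sketch assumes one tachometer pulse per revolution and uses NumPy's linear interpolation in place of the paper's equal-division scheme:

```python
import numpy as np

def angular_resample(t, x, pulse_times, samples_per_rev):
    """Map vibration samples x(t) onto a uniform shaft-angle grid, using
    once-per-revolution tacho pulse times as angle references."""
    revs = np.arange(len(pulse_times), dtype=float)        # revolutions completed at each pulse
    ang = np.arange(0.0, revs[-1], 1.0 / samples_per_rev)  # target angles, in revolutions
    t_ang = np.interp(ang, revs, pulse_times)              # time at which each angle occurs
    return np.interp(t_ang, t, x)                          # amplitude at those times
```

At constant speed this reduces to uniform time resampling; under speed variation the angle grid stretches and compresses in time, which is what removes order smearing from run-up and run-down signals.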
Object tracking with abrupt motion is an important research topic and has attracted wide attention. To obtain accurate tracking results, an improved particle filter tracking algorithm based on sparse representation and nonlinear resampling is proposed in this paper. First, sparse representation is used to compute particle weights, exploiting the fact that the weights are sparse when the object moves abruptly, so the potential object region can be predicted more precisely. Then, a nonlinear resampling process is proposed by utilizing a nonlinear sorting strategy, which can solve the problem of particle diversity impoverishment caused by traditional resampling methods. Experimental results based on videos containing objects with various abrupt motions demonstrate the effectiveness of the proposed algorithm.
In order to deal with the particle degeneracy and impoverishment problems in particle filters, a modified sequential importance resampling (MSIR) filter is proposed. In this filter, the resampling is translated into an evolutionary process much like biological evolution. A particle generator is constructed, which introduces the current measurement information (CMI) into the resampled particles. In the evolution, new particles are first produced through the particle generator, each of which is essentially an unbiased estimate of the current true state. Then, new and old particles are recombined to raise the diversity among the particles. Finally, low-quality particles are eliminated. Through the evolution, all the retained particles are regarded as the optimal ones, and these particles are used to update the current state. With the proposed resampling approach, not only is the CMI incorporated into each resampled particle, but the particle degeneracy and the loss of diversity among the particles are also mitigated, resulting in improved estimation accuracy. Simulation results show the superiority of the proposed filter over the standard sequential importance resampling (SIR) filter, the auxiliary particle filter, and the unscented Kalman particle filter.
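A loose sketch of the evolutionary flavor of such a resampler, for a scalar state, is shown below. The particle generator, the quality measure (closeness to the measurement), and the culling rule here are illustrative stand-ins, not the MSIR paper's exact constructions:

```python
import random

def evolve_resample(particles, z, n_new=20, noise=0.5, seed=42):
    """Evolution-style resampling sketch: breed candidate particles near the
    current measurement z, merge them with the old population, then cull
    the lowest-quality members back down to the original population size."""
    rng = random.Random(seed)
    n = len(particles)
    new = [z + rng.gauss(0.0, noise) for _ in range(n_new)]  # "particle generator"
    pool = list(particles) + new                             # recombine new and old
    pool.sort(key=lambda p: abs(p - z))                      # quality proxy: closeness to z
    return pool[:n], [1.0 / n] * n                           # survivors, uniform weights

survivors, wts = evolve_resample([0.0, 5.0, 9.0, 4.2], z=4.0)
print(survivors)
```

The point of the structure is visible even in this toy: every survivor carries measurement information, and the injected Gaussian candidates keep the population diverse rather than collapsing onto a few copies.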
A high-precision pseudo-noise (PN) ranging system is often required in satellite-formation missions, but in an actual PN ranging system digital signal processing limits the ranging accuracy to the meter level. Noncommensurate sampling, which uses a non-integer chip-to-sample-time ratio, is regarded as an effective way to cope with these digital effects. However, previous work only selected specific ratios or gave simulation models to verify the effectiveness of noncommensurate ratios. Here, a qualitative analysis model is proposed to characterize the relationship between ranging accuracy and the noncommensurate sampling parameters. Moreover, a method is presented for choosing the noncommensurate ratio and the correlation length so as to obtain higher phase-delay distinguishability and lower range jitter. Simulation results confirm our analyses and show that, with the proposed approach, the optimal ranging accuracy can reach centimeter level.
An efficient resampling reliability approach was developed to consider the effect of statistical uncertainties in input properties, arising from insufficient data, when estimating the reliability of rock slopes and tunnels. This approach considers the effect of uncertainties in both the distribution parameters (mean and standard deviation) and the distribution types of input properties. Further, the approach was generalized to handle complex problems with explicit/implicit performance functions (PFs), single/multiple PFs, and correlated/non-correlated input properties. It couples a resampling statistical tool, i.e. the jackknife, with advanced reliability tools such as Latin hypercube sampling (LHS), Sobol's global sensitivity, the moving least squares response surface method (MLS-RSM), and Nataf's transformation. The developed approach was demonstrated on four cases of different types, and the results were compared with a recently developed bootstrap-based resampling reliability approach. The results show that the approach is accurate and significantly more efficient than the bootstrap-based approach. The proposed approach reflects the effect of statistical uncertainties in input properties by estimating distributions/confidence intervals of the reliability index/probability of failure instead of fixed-point estimates. Further, sufficiently accurate results were obtained by considering uncertainties in distribution parameters only and ignoring those in distribution types.
Frame and frequency synchronization are essential for orthogonal frequency division multiplexing (OFDM) systems. The frame offset caused by an incorrect start position of the fast Fourier transform (FFT) window, and the carrier frequency offset (CFO) due to Doppler shift or frequency mismatch between the transmitter and receiver oscillators, can introduce severe inter-symbol interference (ISI) and inter-carrier interference (ICI) in an OFDM system. Relying on the good correlation characteristics of the pseudo-noise (PN) sequence, a joint frame offset and normalized CFO estimation algorithm based on a time-domain PN preamble is developed to realize frame and frequency synchronization in the OFDM system. For comparison, the performances of the traditional algorithm and the improved algorithm are simulated under different conditions. The results indicate that the PN-preamble-based algorithm is more accurate, resource-saving, and robust in both frame offset and CFO estimation, even under poor channel conditions such as low signal-to-noise ratio (SNR) and large normalized CFO.
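The core of PN-preamble frame synchronization is a sliding correlation against the known sequence, with the frame offset estimated as the lag of the correlation peak. A minimal noise-free sketch with a hypothetical 8-chip preamble (the CFO would then be estimated from the phase rotation across the correlated preamble samples, which is omitted here):

```python
def frame_offset(rx, pn):
    """Estimate the frame start as the lag that maximizes the sliding
    correlation of the received samples with the known PN preamble."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(pn) + 1):
        c = sum(rx[lag + i] * pn[i] for i in range(len(pn)))
        if c > best_val:
            best_lag, best_val = lag, c
    return best_lag

pn = [1, -1, 1, 1, -1, -1, 1, -1]      # hypothetical bipolar preamble
rx = [0, 0, 0] + pn + [0, 0]           # preamble embedded at offset 3
print(frame_offset(rx, pn))            # -> 3
```

The sharper the PN autocorrelation peak relative to its sidelobes, the more robust this estimate remains at low SNR, which is the property the abstract relies on.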
The estimation of image resampling factors is an important problem in image forensics. Among resampling factor estimation methods, spectrum-based methods are among the most widely used and have attracted considerable research interest. However, because of an inherent ambiguity, spectrum-based methods fail to discriminate upscale from downscale operations without prior information. In general, resampling leaves detectable traces in both the spatial and frequency domains of a resampled image. First, the resampling process introduces correlations between neighboring pixels, so a set of periodic pixels correlated to their neighbors can be found in a resampled image. Second, a resampled image has distinct, strong peaks in its spectrum, while the spectrum of an original image has no clear peaks. Hence, in this paper we propose a dual-stream convolutional neural network for image resampling factor estimation. One of the two streams is a gray stream, whose purpose is to extract resampling-trace features directly from the rescaled images. The other is a frequency stream, which discovers the spectral differences between rescaled and original images. The features from the two streams are then fused to construct a feature representation comprising the resampling traces left in the spatial and frequency domains, which is fed into a softmax layer for resampling factor estimation. Experimental results show that the proposed method is effective for resampling factor estimation and outperforms several CNN-based methods.
The design, analysis, and parallel implementation of the particle filter (PF) were investigated. First, to tackle the particle degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, in which a new term associated with the current measurement information (CMI) was introduced into the expression for the sampled particles. Through repeated use of the least squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, leading to greatly improved sampling quality. Running the IIDF yields an iterated PF (IPF). Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as that of systematic resampling (SR), but it is performed differently. The PR directly uses the integral part of the product of the particle weight and the particle number as the number of times a particle is replicated, and it simultaneously eliminates the particles with the smallest weights, which are the two key differences from SR. Finally, detailed implementation procedures for the IPF based on PR on a graphics processing unit are presented. The performance of the IPF, the PR, and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application to passive radar target tracking.
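The PR rule as described above is concrete enough to sketch: particle i is replicated floor(w_i * N) times, and the smallest-weight particles are dropped. How the remaining slots are filled is not fully specified here, so the sketch fills them from the largest-weight particles as one reasonable reading:

```python
import math

def parallel_resample(particles, weights):
    """PR-style resampling sketch: replicate particle i floor(w_i * N)
    times, implicitly eliminating the smallest-weight particles, then fill
    any remaining slots from the highest-weight particles (an assumption)."""
    n = len(particles)
    counts = [math.floor(w * n) for w in weights]
    out = []
    for p, c in zip(particles, counts):
        out.extend([p] * c)           # deterministic, branch-free replication
    order = sorted(range(n), key=lambda i: weights[i], reverse=True)
    i = 0
    while len(out) < n:               # top up to exactly n particles
        out.append(particles[order[i % n]])
        i += 1
    return out[:n]
```

Because the replication count is a pure per-particle function of the weight, with no cumulative-sum sweep as in SR, each output slot can be computed independently, which is what makes the rule attractive for GPU parallelization.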
In this paper, large sample properties of resampling tests of hypotheses on the population mean, resampled according to the empirical likelihood and the Kullback-Leibler criteria, are investigated. It is proved that under the null hypothesis both of them are superior to the classical test.
Two variants of systematic resampling (S-RS) are proposed to increase the diversity of particles and thereby improve the performance of particle filtering when it is utilized for detection in Bell Laboratories Layered Space-Time (BLAST) systems. In the first variant, a Markov chain Monte Carlo transition is integrated into the S-RS procedure to increase the diversity of particles with large importance weights. In the second, all particles are first partitioned into two sets according to their importance weights, and a double S-RS is then introduced to increase the diversity of particles with small importance weights. Simulation results show that both variants improve the bit error performance efficiently compared with the standard S-RS, with little added complexity.
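For reference, the standard S-RS that both variants modify draws a single uniform offset and then sweeps N evenly spaced pointers across the cumulative weight distribution:

```python
import random

def systematic_resample(particles, weights):
    """Standard systematic resampling: one uniform draw places N evenly
    spaced pointers over the cumulative weights; particle i is selected
    once for every pointer falling in its weight interval."""
    n = len(particles)
    u0 = random.random() / n
    out, cum, i = [], weights[0], 0
    for k in range(n):
        p = u0 + k / n                # evenly spaced pointer positions
        while p > cum:                # advance to the interval containing p
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out

random.seed(0)
print(systematic_resample(["a", "b", "c", "d"], [0.1, 0.2, 0.3, 0.4]))
```

Because only one random number is drawn, the replication counts are tightly concentrated around N times each weight; the diversity problem the abstract targets is precisely that the surviving particles are exact copies, which the MCMC-transition and double-S-RS variants then perturb.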
Cryptocurrency blockchain data suffer from a class-imbalance problem, since only a few labels of illicit or fraudulent activities are known in the blockchain network. We therefore compare various resampling methods applied to two highly imbalanced datasets derived from the Bitcoin and Ethereum blockchains after further dimensionality reduction, which differs from previous studies of these datasets. First, we study the performance of various classical supervised learning methods for classifying illicit transactions or accounts on the Bitcoin and Ethereum datasets, respectively. Next, we apply various resampling techniques to these datasets using the best-performing learning algorithm on each. Subsequently, we study the feature importance of the resulting models, wherein the resampled datasets directly influence the explainability of the model. Our main finding is that undersampling using the edited nearest-neighbour technique attains an accuracy of more than 99% on the given datasets by removing noisy data points from the whole dataset. Moreover, the best-performing learning algorithms show superior performance after feature reduction on these datasets in comparison to their original studies. The distinctive contribution lies in discussing the effect of data resampling on feature importance, which is interconnected with explainable artificial intelligence (XAI) techniques.
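The edited nearest-neighbour (ENN) undersampling highlighted above removes samples whose neighbourhood disagrees with their label. A one-dimensional toy version is sketched below (a k-NN majority vote on a scalar feature; real use would rely on a library implementation over the full feature space):

```python
def enn_undersample(X, y, k=3):
    """Edited nearest-neighbour cleaning on 1-D features: a sample is kept
    only if at least half of its k nearest neighbours share its label."""
    keep = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        nbrs = sorted((j for j in range(len(X)) if j != i),
                      key=lambda j: abs(X[j] - xi))[:k]
        agree = sum(1 for j in nbrs if y[j] == yi)
        if 2 * agree >= k:
            keep.append(i)
    return [X[i] for i in keep], [y[i] for i in keep]

# The noisy point 2.5 (label 1 surrounded by label-0 neighbours) is removed.
X, y = enn_undersample([0, 1, 2, 3, 10, 11, 12, 2.5],
                       [0, 0, 0, 0, 1, 1, 1, 1])
print(X, y)
```

Unlike random undersampling, ENN edits the boundary rather than the class counts directly, which is consistent with the paper's observation that it works by removing noisy points.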
With the rapid progress of image processing software, image forgery can leave no visual clues in the tampered regions, making us unable to authenticate the image. In general, image forgery often employs scaling, rotation, or skewing operations to tamper with regions of an image, and these demand resampling and interpolation processes. By observing the detectable periodic distribution properties generated by the resampling and interpolation processes, we propose a novel method based on the intrinsic properties of the resampling scheme to detect tampered regions. The proposed method applies a pre-calculated resampling weighting table to detect the periodic properties of the prediction error distribution. Experimental results show that the proposed method outperforms conventional methods in terms of efficiency and accuracy.
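The periodic trace such detectors exploit is easy to reproduce: after linear interpolation, every inserted sample is exactly predictable from its neighbors, so the prediction error vanishes with a fixed period. The signal values and the 2x factor below are illustrative:

```python
def upsample2(x):
    """Linear 2x upsampling: insert the average of each neighbor pair."""
    out = []
    for a, b in zip(x, x[1:]):
        out += [a, (a + b) / 2]
    return out + [x[-1]]

def pred_error(x):
    """Error of predicting each interior sample from its two neighbors."""
    return [x[i] - (x[i - 1] + x[i + 1]) / 2 for i in range(1, len(x) - 1)]

sig = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
err = pred_error(upsample2(sig))
# Every interpolated sample is perfectly predicted, so zeros recur with
# period 2: the periodic prediction-error signature of resampling.
print(err)
```

A natural signal yields no such regular pattern, so the periodicity of the prediction error, which the weighting table in the paper characterizes per interpolation kernel, localizes resampled regions.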
Speech resampling is a typical tampering behavior, often embedded in various speech forgeries such as splicing, electronic disguising, and quality faking. By analyzing the principle of resampling, we found that, compared with natural speech, the bandwidth of resampled speech is inconsistent with its sampling ratio, because the interpolation process in resampling is imperfect. Based on this observation, a new resampling detection algorithm based on the inconsistency of band energy is proposed. First, according to the sampling ratio of the suspected speech, a band-pass Butterworth filter is designed to filter out the residual signal. Then, the logarithmic ratio of band energy is calculated from the suspected speech and the filtered speech. Finally, the resampled and original speech can be discriminated by this logarithmic ratio. Experimental results show that the proposed algorithm can effectively detect resampling under various conditions and is robust to MP3 compression.
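The band-energy inconsistency can be demonstrated without any speech data: a signal linearly upsampled by 2 has almost no energy above its original Nyquist frequency, while a natural full-band signal does. A simple FFT-based log ratio is used below in place of the paper's Butterworth filtering:

```python
import numpy as np

def band_energy_ratio(x, cutoff_frac=0.5):
    """Log ratio (dB) of spectral energy above vs below cutoff_frac * Nyquist."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    k = int(len(spec) * cutoff_frac)
    return 10 * np.log10(spec[k:].sum() / (spec[:k].sum() + 1e-12) + 1e-12)

rng = np.random.default_rng(1)
orig = rng.standard_normal(512)                               # full-band "natural" signal
up = np.interp(np.arange(0, 512, 0.5), np.arange(512), orig)  # 2x linear upsampling
print(band_energy_ratio(orig), band_energy_ratio(up))
```

The upsampled signal's upper band holds only the attenuated interpolation images (roughly -10 dB here), while the natural signal's ratio sits near 0 dB; thresholding such a log ratio is the discrimination idea in the abstract.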
Neural network methods have been widely used in many fields of scientific research with the rapid increase of computing power. Physics-informed neural networks (PINNs) have received much attention as a major breakthrough in solving partial differential equations with neural networks. In this paper, a resampling technique based on an expansion-shrinkage point (ESP) selection strategy is developed to dynamically modify the distribution of training points according to the performance of the neural network. In this new approach, both training points with slight changes in residual values and training points with large residuals are taken into account. To make the distribution of training points more uniform, the concept of continuity is further introduced and incorporated. This method successfully addresses the issue of the neural network becoming ill-conditioned or even crashing due to extensive alteration of the training point distribution. The effectiveness of the improved physics-informed neural networks with expansion-shrinkage resampling is demonstrated through a series of numerical experiments.
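The residual-driven core of such resampling strategies can be sketched as drawing new training points in proportion to the residual magnitude; the ESP expansion-shrinkage and continuity refinements described above are not modeled here:

```python
import random

def residual_resample_points(points, residuals, n_new, seed=0):
    """Draw new collocation points with probability proportional to the
    PDE residual magnitude at each candidate location."""
    rng = random.Random(seed)
    total = sum(abs(r) for r in residuals)
    weights = [abs(r) / total for r in residuals]
    return rng.choices(points, weights=weights, k=n_new)

# All residual mass sits at x = 0.9, so every new point lands there.
print(residual_resample_points([0.1, 0.4, 0.6, 0.9], [0.0, 0.0, 0.0, 2.5], 5))
```

The failure mode the paper addresses is visible in this toy: purely residual-proportional sampling can pile all new points onto one region and starve the rest of the domain, which is what the uniformity (continuity) constraint counteracts.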
This paper proposes a resampling simulator that calculates the probability of detecting invasive species infesting hosts that occur in large numbers. Different methods were examined to determine the bias of the observed cumulative distribution functions (c.d.f.s) generated by prototype resampling simulators. One involved checking whether they matched theoretical c.d.f.s generated from formulae for the probability of the union of many events (union formulae), which are known to be correct. Others involved assessing the bias of observed c.d.f.s generated by prototype resampling simulators operating on much larger simulated populations, when computing theoretical c.d.f.s from the union formulae was impractical. Examples are given of using the proposed resampling simulator to detect an invasive insect pest within the context of an invasive species management system.
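For a single sampling pass, the simulated detection probability can be checked against the exact hypergeometric complement, which is the kind of agreement test the paper performs against union-formula c.d.f.s. The population sizes below are illustrative:

```python
import random
from math import comb

def detect_prob_sim(N, m, n, trials=20000, seed=7):
    """Resampling estimate of P(detect at least one infested host) when
    sampling n of N hosts without replacement, m of them infested."""
    rng = random.Random(seed)
    infested = set(range(m))
    hits = sum(1 for _ in range(trials)
               if infested & set(rng.sample(range(N), n)))
    return hits / trials

def detect_prob_exact(N, m, n):
    """Exact hypergeometric complement: 1 - C(N-m, n) / C(N, n)."""
    return 1 - comb(N - m, n) / comb(N, n)

print(detect_prob_sim(200, 10, 20), detect_prob_exact(200, 10, 20))
```

The simulator earns its keep when the exact combinatorial computation stops being practical, e.g. for unions over many clustered, unequally infested host groups, while small cases like this one calibrate its bias.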
Free energy calculations may provide vital information for studying various chemical and biological processes. Quantum mechanical methods are required to describe interaction energies accurately, but their computations are often too demanding for conformational sampling. As a remedy, level correction schemes have been developed that allow high-level free energies to be calculated from conformations sampled in lower-level simulations. Here, we present a variation of a Monte Carlo (MC) resampling approach in relation to the weighted histogram analysis method (WHAM). We show that our scheme can generate free energy surfaces that practically converge to the exact one with sufficient sampling, and that it treats cases with insufficient sampling more stably than the conventional WHAM-based level correction scheme. It also provides a guide for checking the uncertainty of the level-corrected surface and a well-defined criterion for deciding the extent of smoothing of the free energy surface for its visual improvement. We demonstrate these aspects by obtaining the free energy maps associated with the alanine dipeptide and the proton transfer network of the KillerRed protein in explicit water, and show that the MC-resampled WHAM scheme can be a practical tool for producing free energy surfaces of realistic systems.
An approach to estimating the pseudo-noise (PN) sequence of a direct sequence spread spectrum (DSSS) signal is presented. Without a priori knowledge of the DSSS signal, under non-cooperative conditions, we propose a self-organizing feature map (SOFM) neural network algorithm to detect and identify the PN sequence. An unsupervised learning algorithm is proposed according to the Kohonen rule in the SOFM. The blind algorithm can estimate the PN sequence even at a low signal-to-noise ratio (SNR), and computer simulation demonstrates that it is effective. Compared with the traditional slip-correlation algorithm, the proposed algorithm has a lower bit error rate (BER) and lower complexity.
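The Kohonen-rule update at the heart of a SOFM is a winner-take-all pull of the best-matching weight vector toward each input. The toy below (no neighborhood function; illustrative bipolar "chip" patterns) shows only the competitive update, not the paper's full detection scheme:

```python
import random

def sofm_train(samples, n_units, dim, epochs=50, eta=0.3, seed=3):
    """Winner-take-all Kohonen-style training: the best-matching unit is
    pulled a fraction eta toward each input vector."""
    rng = random.Random(seed)
    w = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in samples:
            # Winner = unit with the smallest squared distance to x.
            win = min(range(n_units),
                      key=lambda u: sum((w[u][d] - x[d]) ** 2 for d in range(dim)))
            for d in range(dim):
                w[win][d] += eta * (x[d] - w[win][d])
    return w

# Two bipolar patterns standing in for PN code segments.
units = sofm_train([[1, -1, 1], [-1, 1, -1]], n_units=2, dim=3)
print(units)   # one weight vector converges near each pattern
```

After training, the weight vectors settle on the recurring chip patterns in the input stream, which is the mechanism by which the SOFM can recover a PN sequence without supervision.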
Funding: Fundamental Research Funds for the Central Universities, No. 2016JBM051 (EDI-based resampling study).
Funding: Supported by the National Natural Science Foundation of China (61701029)
Abstract: Object tracking with abrupt motion is an important research topic and has attracted wide attention. To obtain accurate tracking results, an improved particle filter tracking algorithm based on sparse representation and nonlinear resampling is proposed in this paper. First, sparse representation is used to compute particle weights by exploiting the fact that the weights are sparse when the object moves abruptly, so the potential object region can be predicted more precisely. Then, a nonlinear resampling process is proposed by utilizing a nonlinear sorting strategy, which solves the problem of particle diversity impoverishment caused by traditional resampling methods. Experimental results based on videos containing objects with various abrupt motions demonstrate the effectiveness of the proposed algorithm.
Funding: supported by the National Natural Science Foundation of China (61372136)
Abstract: In order to deal with the particle degeneracy and impoverishment problems that exist in particle filters, a modified sequential importance resampling (MSIR) filter is proposed. In this filter, the resampling is translated into an evolutionary process much like biological evolution. A particle generator is constructed, which introduces the current measurement information (CMI) into the resampled particles. In the evolution, new particles are first produced through the particle generator, each of which is essentially an unbiased estimate of the current true state. Then, new and old particles are recombined for the sake of raising the diversity among the particles. Finally, particles of low quality are eliminated. Through the evolution, all the retained particles are regarded as the optimal ones, and these particles are utilized to update the current state. By using the proposed resampling approach, not only is the CMI incorporated into each resampled particle, but the particle degeneracy and the loss of diversity among the particles are also mitigated, resulting in improved estimation accuracy. Simulation results show the superiority of the proposed filter over the standard sequential importance resampling (SIR) filter, the auxiliary particle filter and the unscented Kalman particle filter.
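For reference, the baseline step that the MSIR filter modifies is standard SIR (multinomial) resampling, sketched below (a minimal NumPy version; the function name is illustrative). It shows exactly where degeneracy relief and impoverishment both originate: high-weight particles are duplicated and low-weight ones are dropped.

```python
import numpy as np

def sir_resample(particles, weights, rng):
    """Standard SIR (multinomial) resampling: draw N indices with
    probability proportional to the importance weights, then reset
    the weights to uniform.  Duplicating heavy particles fights
    degeneracy but causes the diversity loss that evolutionary
    variants such as MSIR try to repair."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    idx = rng.choice(len(w), size=len(w), p=w)
    return particles[idx], np.full(len(w), 1.0 / len(w))
```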
Funding: Project (60904090) supported by the National Natural Science Foundation of China
Abstract: A high-precision pseudo-noise (PN) ranging system is often required in satellite-formation missions. In an actual PN ranging system, however, digital signal processing limits the ranging accuracy to the meter scale. By using a non-integer chip-to-sample time ratio, noncommensurate sampling has been seen as an effective way to cope with these digital effects. However, previous work only selected specific ratios or gave a simulation model to verify the effectiveness of the noncommensurate ratios. A qualitative analysis model is proposed to characterize the relationship between the ranging accuracy and the noncommensurate sampling parameters. Moreover, a method is also presented for choosing the noncommensurate ratio and the correlation length to achieve higher phase delay distinguishability and lower range jitter. The simulation results confirm our analyses, and the ranging accuracy with the proposed approach can reach the centimeter level.
Abstract: An efficient resampling reliability approach was developed to consider the effect of statistical uncertainties in input properties, arising from insufficient data, when estimating the reliability of rock slopes and tunnels. This approach considers the effect of uncertainties in both the distribution parameters (mean and standard deviation) and the distribution types of input properties. Further, the approach was generalized to handle complex problems with explicit/implicit performance functions (PFs), single/multiple PFs, and correlated/non-correlated input properties. It couples a resampling statistical tool, i.e. the jackknife, with advanced reliability tools like Latin hypercube sampling (LHS), Sobol's global sensitivity, the moving least squares response surface method (MLS-RSM), and Nataf's transformation. The developed approach was demonstrated on four cases of different types. Results were compared with a recently developed bootstrap-based resampling reliability approach. The results show that the approach is accurate and significantly more efficient than the bootstrap-based approach. The proposed approach reflects the effect of statistical uncertainties of input properties by estimating distributions/confidence intervals of the reliability index/probability of failure instead of fixed-point estimates. Further, sufficiently accurate results were obtained by considering uncertainties in distribution parameters only and ignoring those in distribution types.
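The jackknife component mentioned above is simple to state in code. The sketch below is the textbook delete-one jackknife for the bias and standard error of an arbitrary statistic (illustrative only; it is not the paper's coupled LHS/Sobol/MLS-RSM pipeline):

```python
import numpy as np

def jackknife(data, stat):
    """Delete-one jackknife: resampling estimates of the bias and
    standard error of the statistic `stat` computed from `data`."""
    data = np.asarray(data, float)
    n = len(data)
    theta_hat = stat(data)
    # leave-one-out replicates of the statistic
    reps = np.array([stat(np.delete(data, i)) for i in range(n)])
    bias = (n - 1) * (reps.mean() - theta_hat)
    se = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
    return theta_hat - bias, se
```

For the sample mean the jackknife standard error coincides with the classical s/√n, a useful sanity check before applying it to opaque reliability estimators.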
基金supported by the National Natural Science Foundation of China(6130110561102069)+2 种基金the China Postdoctoral Science Foundation Funded Project(2013M531351)the Nanjing University of Aeronautics and Astronautics Founding(NN2012022)the Open Fund of Graduate Innovated Base(Laboratory)for the Nanjing University of Aeronautics and Astronautics(KFJJ120219)
Abstract: Frame and frequency synchronization are essential for orthogonal frequency division multiplexing (OFDM) systems. The frame offset, owing to an incorrect start position of the fast Fourier transform (FFT) window, and the carrier frequency offset (CFO), due to Doppler frequency shift or the frequency mismatch between the transmitter and receiver oscillators, can bring severe inter-symbol interference (ISI) and inter-carrier interference (ICI) to the OFDM system. Relying on the relatively good correlation characteristic of the pseudo-noise (PN) sequence, a joint frame offset and normalized CFO estimation algorithm based on a time-domain PN preamble is developed to realize frame and frequency synchronization in the OFDM system. By comparison, the performances of the traditional algorithm and the improved algorithm are simulated under different conditions. The results indicate that the PN preamble based algorithm is more accurate, resource-saving and robust in both frame offset and CFO estimation, even under poor channel conditions such as a low signal-to-noise ratio (SNR) and a large normalized CFO.
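The frame-offset half of such a scheme reduces to a sliding correlation against the known preamble. The sketch below shows that step alone (a generic correlator under simplifying assumptions: ±1 PN preamble, no channel model; it is not the paper's full joint estimator):

```python
import numpy as np

def frame_offset(rx, pn):
    """Estimate the frame start by sliding the known PN preamble over
    the received samples and picking the lag with maximum correlation
    magnitude.  Using the magnitude (via the conjugating np.vdot)
    keeps the peak intact under a constant phase rotation, e.g. from
    a residual CFO that is small over the preamble length."""
    L = len(pn)
    corrs = [np.abs(np.vdot(pn, rx[k:k + L])) for k in range(len(rx) - L + 1)]
    return int(np.argmax(corrs))
```

The sharp autocorrelation of the PN sequence is what makes the peak stand ~√L above the noise floor, which is the "relatively good correlation characteristic" the abstract relies on.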
Funding: the National Natural Science Foundation of China (No. 62072480), the Key Areas R&D Program of Guangdong (No. 2019B010136002), and the Key Scientific Research Program of Guangzhou (No. 201804020068).
Abstract: The estimation of image resampling factors is an important problem in image forensics. Among all resampling factor estimation methods, spectrum-based methods are among the most widely used and have attracted much research interest. However, because of an inherent ambiguity, spectrum-based methods fail to discriminate upscale from downscale operations without prior information. In general, resampling leaves detectable traces in both the spatial domain and the frequency domain of a resampled image. Firstly, the resampling process introduces correlations between neighboring pixels, so a set of periodic pixels that are correlated to their neighbors can be found in a resampled image. Secondly, a resampled image has distinct, strong peaks in its spectrum, while the spectrum of an original image has no clear peaks. Hence, in this paper, we propose a dual-stream convolutional neural network for image resampling factor estimation. One of the two streams is a gray stream, whose purpose is to extract resampling trace features directly from the rescaled images. The other is a frequency stream that discovers the differences in spectrum between rescaled and original images. The features from the two streams are then fused to construct a feature representation of the resampling traces left in the spatial and frequency domains, which is fed into a softmax layer for resampling factor estimation. Experimental results show that the proposed method is effective for resampling factor estimation and outperforms some CNN-based methods.
Funding: Project (61372136) supported by the National Natural Science Foundation of China
Abstract: The design, analysis and parallel implementation of the particle filter (PF) were investigated. Firstly, to tackle the particle degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, where a new term associated with the current measurement information (CMI) was introduced into the expression for the sampled particles. Through the repeated use of the least squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, leading to greatly improved sampling quality. By running the IIDF, an iterated PF (IPF) is obtained. Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as that of systematic resampling (SR), but it is performed differently. The PR directly uses the integer part of the product of the particle weight and the particle number as the number of times a particle is replicated, and it simultaneously eliminates the particles with the smallest weights; these are the two key differences from the SR. The detailed implementation procedures of the PR-based IPF on a graphics processing unit are presented at last. The performance of the IPF, the PR and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application to passive radar target tracking.
Abstract: In this paper, large sample properties of resampling tests of hypotheses on the population mean, resampled according to the empirical likelihood and the Kullback-Leibler criteria, are investigated. It is proved that under the null hypothesis both of them are superior to the classical test.
基金supported by the National Natural Science Foundation of China(6047209860502046U0635003).
Abstract: Two variants of systematic resampling (S-RS) are proposed to increase the diversity of particles and thereby improve the performance of particle filtering when it is utilized for detection in Bell Laboratories Layered Space-Time (BLAST) systems. In the first variant, a Markov chain Monte Carlo transition is integrated into the S-RS procedure to increase the diversity of particles with large importance weights. In the second, all particles are first partitioned into two sets according to their importance weights, and then a double S-RS is introduced to increase the diversity of particles with small importance weights. Simulation results show that both variants improve the bit error performance efficiently compared with the standard S-RS, with little added complexity.
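Both variants start from the standard S-RS step, which can be sketched in a few lines (a minimal NumPy version; the MCMC transition and the double-S-RS partitioning of the two variants are not shown):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Standard systematic resampling (S-RS): a single uniform draw,
    then N evenly spaced pointers swept across the cumulative weight
    distribution.  Returns the indices of the surviving particles."""
    w = np.asarray(weights, float)
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n          # stratified pointers in [0, 1)
    return np.searchsorted(np.cumsum(w / w.sum()), positions)
```

Because only one random number is drawn, S-RS is cheap and low-variance, but identical pointers landing in one heavy CDF segment replicate the same particle many times, which is precisely the diversity loss the two proposed variants address.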
Abstract: Cryptocurrency blockchain data encounter a class-imbalance problem because only a few labels of illicit or fraudulent activities are known in the blockchain network. For this purpose, we compare various resampling methods applied to two highly imbalanced datasets derived from the blockchains of Bitcoin and Ethereum after further dimensionality reduction, which differs from previous studies on these datasets. Firstly, we study the performance of various classical supervised learning methods in classifying illicit transactions or accounts on the Bitcoin and Ethereum datasets, respectively. We then apply various resampling techniques to these datasets using the best-performing learning algorithm on each. Subsequently, we study the feature importance of the given models, wherein the resampled datasets directly influence the explainability of the model. Our main finding is that undersampling using the edited nearest-neighbour technique attains an accuracy of more than 99% on the given datasets by removing noisy data points from the whole dataset. Moreover, the best-performing learning algorithms show superior performance after feature reduction on these datasets in comparison to the original studies. A distinctive contribution lies in discussing the effect of data resampling on feature importance, which is interconnected with explainable artificial intelligence (XAI) techniques.
Abstract: With the rapid progress of image processing software, image forgery can leave no visual clues on the tampered regions, making us unable to authenticate the image. In general, image forgery technologies often utilize scaling, rotation or skewing operations to tamper with some regions of the image, processes that demand resampling and interpolation. By observing the detectable periodic distribution properties generated by the resampling and interpolation processes, we propose a novel method based on the intrinsic properties of the resampling scheme to detect the tampered regions. The proposed method applies a pre-calculated resampling weighting table to detect the periodic properties of the prediction error distribution. The experimental results show that the proposed method outperforms conventional methods in terms of efficiency and accuracy.
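The periodic prediction-error property is easy to demonstrate in one dimension. The sketch below (illustrative, with a fixed two-neighbour predictor rather than the paper's weighting table) upsamples white noise by 2 with linear interpolation; the interpolated samples are predicted exactly, so the prediction-error energy alternates with period 2 and produces a spectral peak at Nyquist:

```python
import numpy as np

def resampling_trace(signal):
    """Spectrum of the squared error of a two-neighbour linear
    predictor.  Interpolated samples are (near-)perfectly predicted
    from their neighbours, so the error variance becomes periodic in
    a resampled signal; a peak in this spectrum betrays resampling."""
    pred = 0.5 * (signal[:-2] + signal[2:])       # predict each sample from its neighbours
    err = signal[1:-1] - pred
    e2 = err ** 2
    return np.abs(np.fft.rfft(e2 - e2.mean()))    # drop DC, keep periodic component

# original noise vs. the same noise upsampled 2x by linear interpolation
rng = np.random.default_rng(3)
x = rng.standard_normal(512)
up = np.interp(np.arange(1024) / 2.0, np.arange(512), x)
```

For the 2x-interpolated signal every odd sample equals the mean of its neighbours, so the period-2 pattern concentrates all the error energy into the Nyquist bin.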
基金This work was supported by the National Natural Science Foundation of China(Grant No.61300055,U1736215,61672302)Zhejiang Natural Science Foundation(Grant No.LY17F020010,LZ15F020002)+1 种基金Ningbo Natural Science Foundation(Grant No.2017A610123)Ningbo University Fund(Grant No.XKXL1509,XKXL1503)and K.C.Wong Magna Fund in Ningbo University.
Abstract: Speech resampling is a typical tampering behavior, which is often integrated into various speech forgeries, such as splicing, electronic disguising, quality faking and so on. By analyzing the principle of resampling, we found that, compared with natural speech, an inconsistency between the bandwidth of the resampled speech and its sampling ratio arises because the interpolation process in resampling is imperfect. Based on this observation, a new resampling detection algorithm based on the inconsistency of band energy is proposed. First, according to the sampling ratio of the suspected speech, a band-pass Butterworth filter is designed to filter out the residual signal. Then, the logarithmic ratio of band energy is calculated from the suspected speech and the filtered speech. Finally, with this logarithmic ratio, resampled and original speech can be discriminated. The experimental results show that the proposed algorithm can effectively detect resampling under various conditions and is robust to MP3 compression.
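The band-energy idea can be sketched with an ideal FFT band split standing in for the paper's Butterworth filter (an assumption made to keep the example dependency-free): a signal upsampled from a lower rate has almost no energy above its original Nyquist frequency, so the log ratio of high-band to low-band energy drops sharply.

```python
import numpy as np

def high_band_log_ratio(x, fs, cutoff):
    """Log ratio (dB) of energy above `cutoff` to energy below it,
    computed with an ideal FFT band split (a brick-wall stand-in for
    the Butterworth filter used in the detection algorithm)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    hi = spec[freqs >= cutoff].sum()
    lo = spec[freqs < cutoff].sum()
    return 10 * np.log10((hi + 1e-12) / (lo + 1e-12))
```

For broadband natural audio this ratio sits near 0 dB, while audio upsampled 2x by interpolation loses roughly 10 dB in the band above its original Nyquist, which is the discriminating statistic.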
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2020YFC1807905), the National Natural Science Foundation of China (Grant Nos. 52079090 and U20A20316), and the Basic Research Program of Qinghai Province (Grant No. 2022-ZJ-704).
Abstract: Neural network methods have been widely used in many fields of scientific research with the rapid increase of computing power. Physics-informed neural networks (PINNs) have received much attention as a major breakthrough in solving partial differential equations using neural networks. In this paper, a resampling technique based on the expansion-shrinkage point (ESP) selection strategy is developed to dynamically modify the distribution of training points according to the performance of the neural network. In this new approach, both training points with slight changes in residual values and training points with large residuals are taken into account. To make the distribution of training points more uniform, the concept of continuity is further introduced and incorporated. This method successfully addresses the issue of the neural network becoming ill-conditioned or even crashing due to extensive alteration of the training point distribution. The effectiveness of the improved physics-informed neural networks with expansion-shrinkage resampling is demonstrated through a series of numerical experiments.
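The simplest form of residual-driven resampling, which ESP-style strategies refine, can be sketched as follows (illustrative only: the function name and the plain residual-proportional sampling rule are assumptions, not the paper's ESP selection with its continuity constraint):

```python
import numpy as np

def resample_training_points(candidates, residual_fn, n_keep, rng):
    """Draw a new PINN training set from a candidate pool with
    probability proportional to the PDE residual magnitude, so the
    network revisits regions where it currently performs worst."""
    r = np.abs(residual_fn(candidates))
    p = r / r.sum()
    idx = rng.choice(len(candidates), size=n_keep, replace=False, p=p)
    return candidates[idx]
```

The failure mode the paper targets is visible here: sampling purely by residual can pile all points into one region between iterations, which is why ESP also tracks points with slight residual changes and enforces continuity for a more uniform distribution.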
Abstract: This paper proposes a resampling simulator that calculates probabilities of detecting invasive species infesting hosts that occur in large numbers. Different methods were examined to determine the bias of observed cumulative distribution functions (c.d.f.s) generated from prototype resampling simulators. One involved checking whether they matched theoretical c.d.f.s generated using formulae for the probability of the union of many events (union formulae), which are known to be correct. Others involved assessing the bias of observed c.d.f.s generated by prototype resampling simulators operating on much larger simulated populations, when computation of theoretical c.d.f.s from the union formulae was not practical. Examples are given of using the proposed resampling simulator to detect an invasive insect pest within the context of an invasive species management system.
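The validate-against-theory approach described above can be illustrated in miniature (a stdlib-only sketch; the population sizes and function names are illustrative, and the exact answer here is the simple hypergeometric complement rather than the paper's union formulae):

```python
import math
import random

def detect_prob_exact(N, K, n):
    """Exact probability that a simple random sample of n hosts, from a
    population of N containing K infested hosts, detects at least one
    infested host (complement of the hypergeometric zero-hit case)."""
    return 1.0 - math.comb(N - K, n) / math.comb(N, n)

def detect_prob_resampled(N, K, n, trials, rng):
    """Resampling simulator: repeatedly draw n hosts without
    replacement and count how often the sample hits an infested host."""
    hosts = [1] * K + [0] * (N - K)          # 1 marks an infested host
    hits = sum(1 for _ in range(trials) if any(rng.sample(hosts, n)))
    return hits / trials
```

Agreement between the simulated and exact probabilities on small populations is exactly the kind of bias check the paper performs before trusting the simulator on populations too large for the exact formulae.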
基金supported by the Mid-career Researcher Program(No.2017R1A2B3004946)through National Research Foundationfunded by Ministry of Science and ICT of Korea.
Abstract: Free energy calculations may provide vital information for studying various chemical and biological processes. Quantum mechanical methods are required to describe interaction energies accurately, but their computations are often too demanding for conformational sampling. As a remedy, level correction schemes have been developed that allow calculating high-level free energies based on conformations from lower-level simulations. Here, we present a variation of a Monte Carlo (MC) resampling approach in relation to the weighted histogram analysis method (WHAM). We show that our scheme can generate free energy surfaces that practically converge to the exact one with sufficient sampling, and that it treats cases with insufficient sampling more stably than the conventional WHAM-based level correction scheme. It can also provide a guide for checking the uncertainty of the level-corrected surface and a well-defined criterion for deciding the extent of smoothing of the free energy surface for its visual improvement. We demonstrate these aspects by obtaining the free energy maps associated with the alanine dipeptide and the proton transfer network of the KillerRed protein in explicit water, and we show that the MC resampled WHAM scheme can be a practical tool for producing free energy surfaces of realistic systems.
Funding: supported by the National Natural Science Foundation of China under Grant No. 61271168
Abstract: An approach to estimating the pseudo-noise (PN) sequence of a direct sequence spread spectrum (DSSS) signal is presented. Without any a priori knowledge about the DSSS signal under non-cooperative conditions, we propose a self-organizing feature map (SOFM) neural network algorithm to detect and identify the PN sequence. A non-supervised learning algorithm is proposed according to the Kohonen rule in the SOFM. The blind algorithm can also estimate the PN sequence at a low signal-to-noise ratio (SNR), and computer simulation demonstrates that the algorithm is effective. Compared with the traditional correlation algorithm based on sliding correlation, the proposed algorithm has a lower bit error rate (BER) and lower complexity.