The nonuniform distribution of the interference spectrum in wavenumber (k) space is a key issue limiting the imaging quality of Fourier-domain optical coherence tomography (FD-OCT). At present, the reconstruction quality at different depths among the variety of k-space processing methods is still uncertain. Using simulated and experimental interference spectra at different depths, the effects of six common processing methods, including uniform resampling (linear interpolation (LI), cubic spline interpolation (CSI), time-domain interpolation (TDI), and K-B window convolution) and nonuniform-sampling direct reconstruction (Lomb periodogram (LP) and nonuniform discrete Fourier transform (NDFT)), on the reconstruction quality of FD-OCT were quantitatively analyzed and compared in this work. The results obtained using simulated and experimental data were consistent. From the experimental results, the averaged peak intensity, axial resolution, and signal-to-noise ratio (SNR) of NDFT at depths from 0.5 mm to 3.0 mm were improved by about 1.9 dB, 1.4 times, and 11.8 dB, respectively, compared to the averaged indices of all the uniform resampling methods at all depths. Similarly, the improvements in the above three indices for LP were 2.0 dB, 1.4 times, and 11.7 dB, respectively. The analysis method and the results obtained in this work are helpful for selecting an appropriate k-space processing method, so as to improve the imaging quality of FD-OCT.
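As a rough illustration of the nonuniform-sampling direct-reconstruction idea in this abstract, a direct NDFT can be evaluated on the raw wavenumber grid without first resampling the spectrum onto a uniform k axis. This is a minimal NumPy sketch with made-up toy data (a single reflector), not the authors' implementation:

```python
import numpy as np

def ndft(spectrum, k, depths):
    """Direct nonuniform discrete Fourier transform: reconstruct an
    A-line from spectral samples taken at nonuniform wavenumbers k,
    without resampling the spectrum onto a uniform k grid first."""
    # One row per depth; each row is exp(-2j*pi*z*k_i) on the raw k grid.
    phase = np.exp(-2j * np.pi * np.outer(depths, k))
    return phase @ spectrum

# Toy interferogram: a single reflector at depth z0 produces a cosine
# fringe in k; the NDFT magnitude should peak at z0 despite the
# nonuniform sampling.
rng = np.random.default_rng(0)
k = np.sort(rng.uniform(0.0, 1.0, 512))    # nonuniformly spaced wavenumbers
z0 = 40.0
spectrum = np.cos(2 * np.pi * k * z0)
depths = np.arange(0.0, 100.0, 0.5)
a_line = np.abs(ndft(spectrum, k, depths))
peak_depth = depths[np.argmax(a_line)]
```

The Lomb periodogram plays a similar direct-reconstruction role but estimates the power at each depth by least-squares fitting of sinusoids to the nonuniform samples.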
Neural network methods have been widely used in many fields of scientific research with the rapid increase of computing power. Physics-informed neural networks (PINNs) have received much attention as a major breakthrough in solving partial differential equations with neural networks. In this paper, a resampling technique based on the expansion-shrinkage point (ESP) selection strategy is developed to dynamically modify the distribution of training points in accordance with the performance of the neural network. This new approach takes into account both training points with slight changes in residual values and training points with large residuals. In order to make the distribution of training points more uniform, the concept of continuity is further introduced and incorporated. This method successfully addresses the issue that the neural network becomes ill-conditioned or even crashes due to the extensive alteration of the training point distribution. The effectiveness of the improved physics-informed neural networks with expansion-shrinkage resampling is demonstrated through a series of numerical experiments.
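The general idea of residual-driven collocation resampling for PINNs can be sketched in a few lines. The sketch below is a simplified stand-in, not the paper's ESP strategy: it draws most new training points with probability proportional to the residual magnitude and mixes in a uniformly drawn fraction so the distribution does not collapse onto a few hot spots (the role the paper's continuity concept plays):

```python
import numpy as np

def residual_resample(candidates, residuals, n_train, uniform_frac=0.3, seed=0):
    """Pick n_train collocation points: most by residual-weighted
    sampling (focus on large PDE errors), the rest uniformly so the
    training distribution is not altered too drastically."""
    rng = np.random.default_rng(seed)
    n_uni = int(uniform_frac * n_train)
    n_res = n_train - n_uni
    p = np.abs(residuals) / np.abs(residuals).sum()
    picked = rng.choice(len(candidates), size=n_res, replace=False, p=p)
    uniform = rng.choice(len(candidates), size=n_uni, replace=False)
    return candidates[np.concatenate([picked, uniform])]

# Hypothetical residual profile concentrated near x = 0.5: the
# resampled training set should cluster there.
x = np.linspace(0.0, 1.0, 1000)
r = np.exp(-((x - 0.5) / 0.05) ** 2) + 0.01
pts = residual_resample(x, r, n_train=200)
near = np.mean(np.abs(pts - 0.5) < 0.1)
```

In a real PINN loop, `residuals` would come from evaluating the PDE residual of the current network on the candidate points after each training stage.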
Twitter has emerged as a platform that produces new data every day through its users, which can be utilized for various purposes. People express their unique ideas and views on multiple topics, thus providing vast knowledge. Sentiment analysis is critical from the corporate and political perspectives as it can impact decision-making. Since the proliferation of COVID-19, it has become an important challenge to detect the sentiment of COVID-19-related tweets so that people's opinions can be tracked. The purpose of this research is to detect the sentiment of people regarding this problem with limited data, as this can be challenging considering the various textual characteristics that must be analyzed. Hence, this research presents a deep learning-based model that utilizes the strengths of random minority oversampling combined with class label analysis to achieve the best results for sentiment analysis. This research specifically focuses on utilizing class label analysis to deal with the multiclass problem by combining class labels with a similar overall sentiment, which can be particularly helpful when dealing with smaller datasets. Furthermore, our proposed model integrates various preprocessing steps with random minority oversampling and various deep learning algorithms, including standard and bi-directional deep learning algorithms. This research explores several algorithms and their impact on sentiment analysis tasks and concludes that bidirectional neural networks do not provide any advantage over standard neural networks, as standard neural networks provide slightly better results than their bidirectional counterparts. The experimental results validate that our model offers excellent results, with a validation accuracy of 92.5% and an F1 measure of 0.92.
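The two data-side steps named in this abstract, merging class labels with a similar overall sentiment and random minority oversampling, are easy to sketch. The label mapping and data below are hypothetical, not the paper's dataset:

```python
import numpy as np
from collections import Counter

def merge_labels(y, mapping):
    """Collapse fine-grained sentiment labels with a similar overall
    polarity (e.g. 'very negative' and 'negative' become one class)."""
    return np.array([mapping[v] for v in y])

def random_oversample(X, y, seed=0):
    """Random minority oversampling: duplicate randomly chosen rows of
    each minority class until every class matches the majority count."""
    rng = np.random.default_rng(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xs, ys = [X], [y]
    for cls, n in counts.items():
        if n < target:
            idx = np.flatnonzero(y == cls)
            extra = rng.choice(idx, size=target - n, replace=True)
            Xs.append(X[extra])
            ys.append(y[extra])
    return np.concatenate(Xs), np.concatenate(ys)

# Hypothetical 5-class sentiment labels collapsed to 3, then balanced.
y = np.array([0] * 50 + [1] * 30 + [2] * 200 + [3] * 40 + [4] * 20)
X = np.arange(len(y), dtype=float)[:, None]
y3 = merge_labels(y, {0: 0, 1: 0, 2: 1, 3: 2, 4: 2})
Xb, yb = random_oversample(X, y3)
```

After merging, the three classes have 80, 200, and 60 examples; oversampling brings all of them to 200 before the (much smaller) dataset is fed to the deep learning model.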
In order to address the issues of computational accuracy and efficiency in traditional resampling algorithms for rolling element bearing fault diagnosis, an equal division impulse-based (EDI-based) resampling algorithm is proposed. First, the time marks of every rising edge of the rotating speed pulse and the corresponding amplitudes of the faulty bearing vibration signal are determined. Then, each interval between adjacent rotating speed pulses is divided equally, and the time marks within each interval and the corresponding amplitudes of the vibration signal are obtained by interpolation. Finally, all the time marks and the corresponding amplitudes of the vibration signal are arranged, and the time marks are transformed into the angle domain to obtain the resampled signal. Speed-up and speed-down faulty bearing signals are employed to verify the validity of the proposed method, and experimental results show that the proposed method is effective for diagnosing faulty bearings. Furthermore, traditional order tracking techniques are applied to the experimental bearing signals, and the results show that the proposed method produces more accurate outcomes in less computation time.
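The angle-domain resampling step underlying this kind of order tracking can be illustrated with a minimal NumPy sketch. This is the generic idea only (one tacho pulse per revolution, linear interpolation of the phase), not the EDI algorithm itself, and the accelerating-shaft signal is synthetic:

```python
import numpy as np

def angular_resample(t, x, pulse_times, samples_per_rev=64):
    """Map the time axis to shaft angle using tacho pulse time marks
    (assumed here: one pulse per revolution), then interpolate the
    vibration signal onto a uniform angle grid."""
    revs = np.arange(len(pulse_times), dtype=float)  # shaft angle (rev) at each pulse
    angle = np.interp(t, pulse_times, revs)          # angle(t), piecewise linear
    uniform_angle = np.arange(0.0, revs[-1], 1.0 / samples_per_rev)
    return uniform_angle, np.interp(uniform_angle, angle, x)

# A linearly accelerating shaft: an order-2 component (2 cycles per
# revolution) smears across frequencies in the time domain but sits at
# a fixed "order" once the signal is resampled to the angle domain.
t = np.linspace(0.0, 2.0, 4000)
shaft_revs = 5 * t + 5 * t**2                  # shaft position in revolutions
x = np.sin(2 * np.pi * 2 * shaft_revs)         # order-2 vibration component
# Tacho time marks: instants where 5t + 5t^2 equals an integer n.
n = np.arange(31)
pulse_times = (-5 + np.sqrt(25 + 20 * n)) / 10
ua, xs = angular_resample(t, x, pulse_times)
spec = np.abs(np.fft.rfft(xs * np.hanning(len(xs))))
orders = np.fft.rfftfreq(len(xs), d=1.0 / 64)  # cycles per revolution
peak_order = orders[np.argmax(spec[1:]) + 1]
```

In the angle domain the spectrum peaks at order 2 regardless of the speed-up, which is what makes faulty-bearing orders identifiable under varying speed.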
An efficient resampling reliability approach was developed to consider the effect of statistical uncertainties in input properties arising from insufficient data when estimating the reliability of rock slopes and tunnels. This approach considers the effect of uncertainties in both the distribution parameters (mean and standard deviation) and the distribution types of input properties. Further, the approach was generalized to make it capable of analyzing complex problems with explicit/implicit performance functions (PFs), single/multiple PFs, and correlated/non-correlated input properties. It couples a resampling statistical tool, i.e. the jackknife, with advanced reliability tools like Latin hypercube sampling (LHS), Sobol's global sensitivity analysis, the moving least squares response surface method (MLS-RSM), and Nataf's transformation. The developed approach was demonstrated for four cases encompassing different problem types. Results were compared with a recently developed bootstrap-based resampling reliability approach. The results show that the approach is accurate and significantly more efficient than the bootstrap-based approach. The proposed approach reflects the effect of statistical uncertainties in input properties by estimating distributions/confidence intervals of the reliability index/probability of failure instead of fixed-point estimates. Further, sufficiently accurate results were obtained by considering uncertainties in distribution parameters only and ignoring those in distribution types.
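The jackknife resampling tool at the heart of this approach is simple to state: recompute the statistic with each observation left out, then form a bias-corrected estimate and a standard error from the spread of the leave-one-out replicates. A minimal sketch on synthetic data (the reliability-specific machinery of the paper is not reproduced here):

```python
import numpy as np

def jackknife(stat, data):
    """Leave-one-out resampling: return the bias-corrected jackknife
    estimate and standard error of `stat` evaluated on `data`."""
    n = len(data)
    reps = np.array([stat(np.delete(data, i)) for i in range(n)])
    theta = stat(data)
    bias = (n - 1) * (reps.mean() - theta)
    se = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
    return theta - bias, se

rng = np.random.default_rng(1)
data = rng.normal(10.0, 2.0, 200)   # stand-in for a measured rock property
est, se = jackknife(np.mean, data)
```

For the sample mean the jackknife standard error reduces exactly to s/sqrt(n); in the paper's setting the statistic would instead be a reliability index computed through LHS and the response surface, and the jackknife spread yields its confidence interval.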
The merging of a panchromatic (PAN) image with a multispectral satellite image (MSI) to increase the spatial resolution of the MSI, while simultaneously preserving its spectral information, is classically referred to as PAN-sharpening. We employed a recent dataset derived from very high resolution WorldView-2 satellite imagery (PAN and MSI) for two test sites (one over an urban area and the other over Antarctica) to comprehensively evaluate the performance of six existing PAN-sharpening algorithms. The algorithms under consideration were Gram-Schmidt (GS), Ehlers fusion (EF), modified hue-intensity-saturation (Mod-HIS), high pass filtering (HPF), the Brovey transform (BT), and wavelet-based principal component analysis (W-PC). Quality assessment of the sharpened images was carried out using 20 quality indices. We also analyzed the performance of the nearest neighbour (NN), bilinear interpolation (BI), and cubic convolution (CC) resampling methods to test their practicability in the PAN-sharpening process. Our results indicate that the comprehensive performance of the PAN-sharpening methods decreased in the following order: GS > W-PC > EF > HPF > Mod-HIS > BT, while the resampling methods followed the order: NN > BI > CC.
In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front-ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M-data-load time period. We present a load-process architecture (LPA) and a runtime architecture (RA) (based on a serial polyphase structure) which have different scheduling. In LPA, N subfilters are loaded, and then M subfilters are processed at a clock rate that is a multiple of the input data rate. This is necessary to meet the output time constraint of the down-sampled data. In RA, M subfilter processes are efficiently scheduled within the N-data-load time while simultaneously loading N subfilters. This requires reduced clock rates compared with LPA, and potentially less power is consumed. A polyphase filter bank that uses different resampling factors for maximally decimated, under-decimated, over-decimated, and combined up- and down-sampled scenarios is used as a case study, and an analysis of area, time, and power for their FPGA architectures is given. For resource-optimized SDR front-ends, RA is superior for reducing operating clock rates and dynamic power consumption. RA is also superior for reducing area resources, except when indices are prestored in LUTs.
The design, analysis, and parallel implementation of the particle filter (PF) were investigated. First, to tackle the particle degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, where a new term associated with the current measurement information (CMI) was introduced into the expression of the sampled particles. Through the repeated use of the least squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, leading to greatly improved sampling quality. By running the IIDF, an iterated PF (IPF) can be obtained. Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as systematic resampling (SR) but it is performed differently. The PR directly uses the integer part of the product of the particle weight and the particle number as the number of times that a particle is replicated, and it simultaneously eliminates the particles with the smallest weights, which are the two key differences from the SR. The detailed implementation procedures of the PR-based IPF on a graphics processing unit are presented last. The performance of the IPF, the PR, and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application in passive radar target tracking.
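For reference, the standard systematic resampling (SR) that the paper's PR scheme modifies can be written in a few lines: draw one random offset, step through N evenly spaced positions, and replicate each particle according to how many positions fall inside its slice of the cumulative weight function. A minimal NumPy sketch (not the paper's PR variant):

```python
import numpy as np

def systematic_resample(weights, seed=0):
    """Standard systematic resampling: one random offset, then N evenly
    spaced positions compared against the cumulative weights. Each
    particle i ends up replicated roughly N * w_i times."""
    n = len(weights)
    rng = np.random.default_rng(seed)
    positions = (rng.uniform() + np.arange(n)) / n
    cumw = np.cumsum(weights)
    cumw[-1] = 1.0                       # guard against floating-point drift
    return np.searchsorted(cumw, positions)

rng = np.random.default_rng(2)
w = rng.random(100)
w /= w.sum()                             # normalized particle weights
idx = systematic_resample(w)             # indices of the surviving particles
counts = np.bincount(idx, minlength=len(w))
```

Because the positions are evenly spaced, each replication count deviates from N*w_i by less than one, which is the low-variance property SR is known for; the paper's PR keeps the integer part of N*w_i deterministic and drops the lowest-weight particles instead, which is what makes it parallelizable.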
The estimation of image resampling factors is an important problem in image forensics. Among all resampling factor estimation methods, spectrum-based methods are among the most widely used and have attracted a lot of research interest. However, because of inherent ambiguity, spectrum-based methods fail to discriminate upscale and downscale operations without any prior information. In general, the application of resampling leaves detectable traces in both the spatial domain and the frequency domain of a resampled image. First, the resampling process introduces correlations between neighboring pixels; in this case, a set of periodic pixels that are correlated to their neighbors can be found in a resampled image. Second, the resampled image has distinct and strong peaks in its spectrum, while the spectrum of the original image has no clear peaks. Hence, in this paper, we propose a dual-stream convolutional neural network for image resampling factor estimation. One of the two streams is a gray stream whose purpose is to extract resampling trace features directly from the rescaled images. The other is a frequency stream that discovers the differences in spectrum between rescaled and original images. The features from the two streams are then fused to construct a feature representation covering the resampling traces left in the spatial and frequency domains, which is later fed into a softmax layer for resampling factor estimation. Experimental results show that the proposed method is effective for resampling factor estimation and outperforms some CNN-based methods.
In order to deal with the particle degeneracy and impoverishment problems existing in particle filters, a modified sequential importance resampling (MSIR) filter is proposed. In this filter, the resampling is translated into an evolutionary process much like biological evolution. A particle generator is constructed, which introduces the current measurement information (CMI) into the resampled particles. In the evolution, new particles are first produced through the particle generator, each of which is essentially an unbiased estimate of the current true state. Then, new and old particles are recombined for the sake of raising the diversity among the particles. Finally, those particles with low quality are eliminated. Through the evolution, all the particles retained are regarded as the optimal ones, and these particles are utilized to update the current state. By using the proposed resampling approach, not only is the CMI incorporated into each resampled particle, but the particle degeneracy and the loss of diversity among the particles are also mitigated, resulting in improved estimation accuracy. Simulation results show the superiority of the proposed filter over the standard sequential importance resampling (SIR) filter, the auxiliary particle filter, and the unscented Kalman particle filter.
Object tracking with abrupt motion is an important research topic and has attracted wide attention. To obtain accurate tracking results, an improved particle filter tracking algorithm based on sparse representation and nonlinear resampling is proposed in this paper. First, sparse representation is used to compute particle weights by considering the fact that the weights are sparse when the object moves abruptly, so the potential object region can be predicted more precisely. Then, a nonlinear resampling process is proposed by utilizing a nonlinear sorting strategy, which can solve the problem of particle diversity impoverishment caused by traditional resampling methods. Experimental results based on videos containing objects with various abrupt motions have demonstrated the effectiveness of the proposed algorithm.
In this paper, large sample properties of resampling tests of hypotheses on the population mean, resampled according to the empirical likelihood and the Kullback-Leibler criteria, are investigated. It is proved that under the null hypothesis both of them are superior to the classical one.
Speech resampling is a typical tampering behavior, which is often integrated into various speech forgeries, such as splicing, electronic disguising, quality faking, and so on. By analyzing the principle of resampling, we found that, compared with natural speech, an inconsistency between the bandwidth of the resampled speech and its sampling ratio arises because the interpolation process in resampling is imperfect. Based on this observation, a new resampling detection algorithm based on the inconsistency of band energy is proposed. First, according to the sampling ratio of the suspected speech, a band-pass Butterworth filter is designed to filter out the residual signal. Then, the logarithmic ratio of band energy is calculated from the suspected speech and the filtered speech. Finally, with the logarithmic ratio, resampled and original speech can be discriminated. The experimental results show that the proposed algorithm can effectively detect resampling under various conditions and is robust to MP3 compression.
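The bandwidth/sampling-ratio inconsistency described here can be demonstrated with a small sketch. For simplicity this uses an ideal FFT band split in place of the paper's Butterworth filter, and white noise as a stand-in for speech; the upsampling itself is done by frequency-domain zero-padding, so the assumptions differ from the paper but the detected effect is the same:

```python
import numpy as np

def band_energy_log_ratio(x, cutoff_frac=0.6):
    """Log ratio (dB) of the energy above `cutoff_frac` of Nyquist to
    the total energy, via an ideal FFT band split (a stand-in for the
    paper's Butterworth filter). Speech upsampled from a lower rate has
    almost no genuine content in the top band, so this ratio collapses."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    k = int(cutoff_frac * len(spec))
    return 10 * np.log10(spec[k:].sum() / spec.sum() + 1e-12)

rng = np.random.default_rng(3)
natural = rng.normal(size=16000)        # full-band stand-in for natural speech
narrow = rng.normal(size=8000)          # a "narrowband" recording
# Band-limited 2x upsampling by zero-padding the spectrum: the result
# claims a 16 kHz rate but carries no energy above the old Nyquist.
upsampled = 2.0 * np.fft.irfft(np.fft.rfft(narrow), 16000)
r_natural = band_energy_log_ratio(natural)
r_upsampled = band_energy_log_ratio(upsampled)
```

The tens-of-dB gap between `r_natural` and `r_upsampled` is the discriminating statistic; a threshold on this log ratio separates resampled from original recordings.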
Two variants of systematic resampling (S-RS) are proposed to increase the diversity of particles and thereby improve the performance of particle filtering when it is utilized for detection in Bell Laboratories Layered Space-Time (BLAST) systems. In the first variant, a Markov chain Monte Carlo transition is integrated into the S-RS procedure to increase the diversity of particles with large importance weights. In the second, all particles are first partitioned into two sets according to their importance weights, and then a double S-RS is introduced to increase the diversity of particles with small importance weights. Simulation results show that both variants can improve the bit error performance efficiently compared with the standard S-RS, with little added complexity.
Free energy calculations may provide vital information for studying various chemical and biological processes. Quantum mechanical methods are required to accurately describe interaction energies, but their computations are often too demanding for conformational sampling. As a remedy, level correction schemes that allow calculating high-level free energies based on conformations from lower-level simulations have been developed. Here, we present a variation of a Monte Carlo (MC) resampling approach in relation to the weighted histogram analysis method (WHAM). We show that our scheme can generate free energy surfaces that can practically converge to the exact one with sufficient sampling, and that it treats cases with insufficient sampling in a more stable manner than the conventional WHAM-based level correction scheme. It can also provide a guide for checking the uncertainty of the level-corrected surface and a well-defined criterion for deciding the extent of smoothing on the free energy surface for its visual improvement. We demonstrate these aspects by obtaining the free energy maps associated with the alanine dipeptide and the proton transfer network of the KillerRed protein in explicit water, and exemplify that the MC resampled WHAM scheme can be a practical tool for producing free energy surfaces of realistic systems.
Strong spatial variance of the imaging parameters and serious geometric distortion of the image are induced by the acceleration and vertical velocity of a high-squint synthetic aperture radar (SAR) mounted on a maneuvering platform. In this paper, a frequency-domain imaging algorithm is proposed based on a novel slant range model and azimuth perturbation resampling. First, a novel slant range model is presented for mitigating the geometric distortion according to the equal-squint-angle curve on the ground surface. Second, the correction of azimuth-dependent range cell migration (RCM) is achieved by introducing a high-order time-domain perturbation function. Third, an azimuth perturbation resampling method is proposed for azimuth compression. The azimuth resampling and the time-domain perturbation are used for correcting the first-order and high-order azimuthal spatially variant components, respectively. Experimental results illustrate that the proposed algorithm can effectively improve the focusing quality and the geometric distortion correction accuracy of the imaging scene.
With the rapid progress of image processing software, image forgery can leave no visual clues on the tampered regions, making us unable to authenticate the image. In general, image forgery technologies often utilize scaling, rotation, or skewing operations to tamper with some regions of the image, in which resampling and interpolation processes are often required. By observing the detectable periodic distribution properties generated by the resampling and interpolation processes, we propose a novel method based on the intrinsic properties of the resampling scheme to detect the tampered regions. The proposed method applies a pre-calculated resampling weighting table to detect the periodic properties of the prediction error distribution. The experimental results show that the proposed method outperforms conventional methods in terms of efficiency and accuracy.
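The periodic prediction-error property this abstract relies on is easy to reproduce in one dimension. The sketch below is a generic illustration of the effect, not the paper's weighting-table method: after 2x linear-interpolation upscaling, every other sample is (up to rounding) exactly the average of its neighbors, so a simple linear predictor's error becomes periodic and its spectrum gains a dominant peak:

```python
import numpy as np

def periodicity_score(signal):
    """Prediction error of a two-neighbor linear predictor; for an
    interpolated (resampled) signal the error is periodically near
    zero, giving a dominant peak in the error's spectrum. Returns the
    peak-to-median ratio of that spectrum."""
    pred = 0.5 * (signal[:-2] + signal[2:])
    err = np.abs(signal[1:-1] - pred)
    spec = np.abs(np.fft.rfft(err - err.mean()))
    return spec[1:].max() / np.median(spec[1:])

rng = np.random.default_rng(4)
original = rng.normal(size=1200)                 # stand-in pixel row
# 2x upscaling by linear interpolation.
positions = np.arange(0, 1199.5, 0.5)
upscaled = np.interp(positions, np.arange(1200), original)
s_original = periodicity_score(original)
s_upscaled = periodicity_score(upscaled)
```

A region whose rows score far above the rest of the image is a candidate tampered region; the paper's weighting table generalizes this to arbitrary resampling factors and interpolation kernels.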
This paper proposes a resampling simulator that will calculate probabilities of detecting invasive species infesting hosts that occur in large numbers. Different methods were examined to determine the bias of observed cumulative distribution functions (c.d.f.s) generated from prototype resampling simulators. One involved seeing if they matched theoretical c.d.f.s, which were generated using formulae for calculating the probability of the union of many events (union formulae), which are known to be correct. Others involved assessing the bias of observed c.d.f.s generated from prototype resampling simulators operating on much larger simulated populations, when computation of theoretical c.d.f.s from the union formulae was not practical. Examples are given of using the proposed resampling simulator for detecting an invasive insect pest within the context of an invasive species management system.
In data envelopment analysis (DEA), input and output values are subject to change for several reasons. Such variations differ across input/output items and decision-making units (DMUs). Hence, DEA efficiency scores need to be examined by considering these factors. In this paper, we propose new resampling models based on these variations for gauging the confidence intervals of DEA scores. The first model utilizes past-present data for estimating data variations, imposing chronological order weights supplied by the Lucas series (a variant of the Fibonacci series). The second model deals with future prospects, aiming to forecast the future efficiency score and its confidence interval for each DMU. We applied our models to a dataset composed of Japanese municipal hospitals.
Medical images are being applied more and more widely in clinical diagnosis and treatment. How to use the large number of images in an image management system to assist doctors in analysis and diagnosis is a very important issue. This paper studies medical image retrieval based on a multi-layer resampling template under the framework of wavelet decomposition. The image retrieval method consists of two retrieval processes: coarse and fine retrieval. The coarse retrieval process retrieves medical images based on image contour features. The fine retrieval process retrieves medical images based on the multi-layer resampling template: a multi-layer sampling operator is employed to extract resampled images at each layer, and these resampled images are then retrieved step by step to complete the process from coarse to fine retrieval.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61575205 and 62175022), the Sichuan Natural Science Foundation (2022NSFSC0803), and the Sichuan Science and Technology Program (2021JDRC0035).
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2020YFC1807905), the National Natural Science Foundation of China (Grant Nos. 52079090 and U20A20316), and the Basic Research Program of Qinghai Province (Grant No. 2022-ZJ-704).
Funding: This work was funded by the Deanship of Scientific Research at Jouf University under Grant Number DSR2022-RG-0105.
Funding: Fundamental Research Funds for the Central Universities (No. 2016JBM051)
Abstract: In order to address the issues of computational accuracy and efficiency that traditional resampling algorithms face in rolling element bearing fault diagnosis, an equal division impulse-based (EDI-based) resampling algorithm is proposed. First, the time marks of every rising edge of the rotating speed pulse and the corresponding amplitudes of the faulty bearing vibration signal are determined. Then, each pair of adjacent rotating speed pulses is divided equally, and the time marks within each pair and the corresponding vibration signal amplitudes are obtained by interpolation. Finally, all the time marks and the corresponding vibration amplitudes are arranged, and the time marks are transformed into the angle domain to obtain the resampled signal. Speed-up and speed-down faulty bearing signals are employed to verify the validity of the proposed method, and experimental results show that the proposed method is effective for diagnosing faulty bearings. Furthermore, traditional order tracking techniques are applied to the experimental bearing signals, and the results show that the proposed method produces more accurate outcomes in less computation time.
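The core of such angle-domain (order tracking) resampling can be sketched in a few lines. This is a simplified illustration assuming one tacho pulse per revolution and linear interpolation within each pulse interval; the paper's EDI scheme divides the pulse intervals in the same spirit, but its exact interpolation details are not given in the abstract:

```python
import numpy as np

def angle_resample(t, x, pulse_times, pts_per_rev):
    """Resample a vibration signal x(t) at equal shaft-angle increments.

    Each pair of adjacent tacho pulses is assumed to span one revolution;
    the pulse interval is divided into equal angular steps (equal time
    steps under a constant-speed-per-revolution assumption), and the
    amplitude at each step is obtained by linear interpolation in time.
    """
    angle_times = []
    for t0, t1 in zip(pulse_times[:-1], pulse_times[1:]):
        angle_times.append(np.linspace(t0, t1, pts_per_rev, endpoint=False))
    angle_times = np.concatenate(angle_times)
    return angle_times, np.interp(angle_times, t, x)
```

The returned samples are uniform in angle rather than in time, so an ordinary FFT of the resampled signal yields an order spectrum even when the shaft speed varies.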
Abstract: An efficient resampling reliability approach was developed to consider the effect of statistical uncertainties in input properties, arising due to insufficient data, when estimating the reliability of rock slopes and tunnels. This approach considers the effect of uncertainties in both the distribution parameters (mean and standard deviation) and the distribution types of input properties. Further, the approach was generalized to make it capable of analyzing complex problems with explicit/implicit performance functions (PFs), single/multiple PFs, and correlated/non-correlated input properties. It couples a statistical resampling tool, i.e. the jackknife, with advanced reliability tools such as Latin hypercube sampling (LHS), Sobol's global sensitivity analysis, the moving least squares response surface method (MLS-RSM), and Nataf's transformation. The developed approach was demonstrated for four cases encompassing different problem types. Results were compared with a recently developed bootstrap-based resampling reliability approach, and they show that the approach is accurate and significantly more efficient than the bootstrap-based approach. The proposed approach reflects the effect of statistical uncertainties of input properties by estimating distributions/confidence intervals of the reliability index/probability of failure instead of fixed-point estimates. Further, sufficiently accurate results were obtained by considering uncertainties in distribution parameters only and ignoring those in distribution types.
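For readers unfamiliar with the jackknife used here, a minimal leave-one-out sketch (generic, not the paper's coupled LHS/MLS-RSM pipeline) estimates the standard error of a statistic by recomputing it with each observation deleted in turn:

```python
import numpy as np

def jackknife_se(data, stat=np.mean):
    """Leave-one-out jackknife estimate of the standard error of `stat`.

    Computes the statistic n times, each time with one observation
    removed, and combines the spread of these leave-one-out values.
    """
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
```

For the sample mean, this reduces exactly to the classical s/sqrt(n) standard error, which makes it easy to sanity-check.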
Abstract: The merging of a panchromatic (PAN) image with a multispectral satellite image (MSI) to increase the spatial resolution of the MSI while simultaneously preserving its spectral information is classically referred to as PAN-sharpening. We employed a recent dataset derived from the very high resolution WorldView-2 satellite (PAN and MSI) for two test sites (one over an urban area and the other over Antarctica) to comprehensively evaluate the performance of six existing PAN-sharpening algorithms. The algorithms under consideration were Gram-Schmidt (GS), Ehlers fusion (EF), modified hue-intensity-saturation (Mod-HIS), high pass filtering (HPF), the Brovey transform (BT), and wavelet-based principal component analysis (W-PC). Quality assessment of the sharpened images was carried out using 20 quality indices. We also analyzed the performance of the nearest neighbour (NN), bilinear interpolation (BI), and cubic convolution (CC) resampling methods to test their practicability in the PAN-sharpening process. Our results indicate that the comprehensive performance of the PAN-sharpening methods decreased in the following order: GS > W-PC > EF > HPF > Mod-HIS > BT, while the resampling methods followed the order: NN > BI > CC.
Abstract: In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, where the number of data loads is not equal to the number of M subfilters, processes M subfilters in a time period that is either less than or greater than the M-data-load time period. We present a load-process architecture (LPA) and a runtime architecture (RA) (based on a serial polyphase structure) which have different scheduling. In LPA, N subfilters are loaded, and then M subfilters are processed at a clock rate that is a multiple of the input data rate; this is necessary to meet the output time constraint of the down-sampled data. In RA, M subfilter processes are efficiently scheduled within the N-data-load time while simultaneously loading N subfilters. This requires reduced clock rates compared with LPA, and potentially less power is consumed. A polyphase filter bank that uses different resampling factors for maximally decimated, under-decimated, over-decimated, and combined up- and down-sampled scenarios is used as a case study, and an analysis of area, time, and power for their FPGA architectures is given. For resource-optimized SDR front ends, RA is superior for reducing operating clock rates and dynamic power consumption. RA is also superior for reducing area resources, except when indices are prestored in LUTs.
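As a small illustration of why polyphase structures are efficient, the sketch below (plain NumPy, function names my own; a software model only, not the paper's FPGA architecture) decimates by M with an M-path polyphase decomposition. It matches the direct filter-then-downsample output, while every branch operates only on a downsampled sub-sequence, i.e. at the low output rate:

```python
import numpy as np

def direct_decimate(x, h, M):
    """Reference: filter at the full input rate, then keep every M-th sample."""
    return np.convolve(x, h)[::M]

def polyphase_decimate(x, h, M):
    """M-path polyphase decimation: branch p filters the sub-sequence
    x_p[j] = x[j*M - p] with the subfilter h[p::M], so all arithmetic
    runs at the output rate; branch outputs are summed."""
    branches = []
    for p in range(M):
        h_p = h[p::M]                                   # p-th subfilter
        if p == 0:
            x_p = x[0::M]
        else:
            x_p = np.concatenate(([0.0], x[M - p::M]))  # x[-p] treated as 0
        branches.append(np.convolve(h_p, x_p))
    n = max(len(b) for b in branches)
    y = np.zeros(n)
    for b in branches:
        y[:len(b)] += b
    return y
```

Because each of the M branches performs only 1/M of the multiplications per input sample, the total arithmetic matches the direct form but can be scheduled at the output clock rate, which is the property the LPA/RA scheduling above exploits.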
Funding: Project (61372136) supported by the National Natural Science Foundation of China
Abstract: The design, analysis and parallel implementation of the particle filter (PF) were investigated. Firstly, to tackle the particle degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, in which a new term associated with the current measurement information (CMI) was introduced into the expression for the sampled particles. Through the repeated use of the least squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, leading to greatly improved sampling quality. By running the IIDF, an iterated PF (IPF) can be obtained. Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as that of systematic resampling (SR), but it is performed differently. The PR directly uses the integral part of the product of the particle weight and the particle number as the number of times a particle is replicated, and it simultaneously eliminates the particles with the smallest weights; these are the two key differences from SR. The detailed implementation procedures of the PR-based IPF on a graphics processing unit are presented at last. The performance of the IPF, the PR and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application to passive radar target tracking.
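The PR rule described here (replicate particle i floor(N*w_i) times, then fill the remaining slots so that the smallest-weight particles are eliminated) can be sketched in a few lines. This is my serial NumPy reading of the rule as stated in the abstract, not the paper's GPU code:

```python
import numpy as np

def parallel_resample(particles, weights):
    """Replicate particle i floor(N * w_i) times; give the leftover slots
    one extra copy each of the largest-weight particles, so the output
    again holds N particles and the smallest-weight ones are eliminated.
    Each count depends only on that particle's own weight, which is what
    makes the rule parallelizable."""
    N = len(particles)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    counts = np.floor(N * w).astype(int)
    short = N - counts.sum()                 # slots left after replication
    if short > 0:
        counts[np.argsort(w)[-short:]] += 1  # extra copies to largest weights
    return np.repeat(particles, counts, axis=0)
```

Unlike SR, no cumulative sum over all weights is needed to decide each particle's replication count, which removes the serial dependency that makes SR awkward on a GPU.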
Funding: the National Natural Science Foundation of China (No. 62072480), the Key Areas R&D Program of Guangdong (No. 2019B010136002), and the Key Scientific Research Program of Guangzhou (No. 201804020068).
Abstract: The estimation of image resampling factors is an important problem in image forensics. Among all resampling factor estimation methods, spectrum-based methods are among the most widely used and have attracted a lot of research interest. However, because of inherent ambiguity, spectrum-based methods fail to discriminate upscale and downscale operations without any prior information. In general, the application of resampling leaves detectable traces in both the spatial domain and the frequency domain of a resampled image. Firstly, the resampling process introduces correlations between neighboring pixels; as a result, a set of periodic pixels that are correlated with their neighbors can be found in a resampled image. Secondly, the resampled image has distinct, strong peaks in its spectrum, while the spectrum of the original image has no clear peaks. Hence, in this paper, we propose a dual-stream convolutional neural network for image resampling factor estimation. One of the two streams is a gray stream, whose purpose is to extract resampling trace features directly from the rescaled images. The other is a frequency stream that discovers the differences in spectrum between rescaled and original images. The features from the two streams are then fused to construct a feature representation including the resampling traces left in the spatial and frequency domains, which is later fed into a softmax layer for resampling factor estimation. Experimental results show that the proposed method is effective for resampling factor estimation and outperforms some CNN-based methods.
Funding: supported by the National Natural Science Foundation of China (61372136)
Abstract: In order to deal with the particle degeneracy and impoverishment problems of particle filters, a modified sequential importance resampling (MSIR) filter is proposed. In this filter, the resampling is translated into an evolutionary process, much like biological evolution. A particle generator is constructed, which introduces the current measurement information (CMI) into the resampled particles. In the evolution, new particles are first produced through the particle generator, each of which is essentially an unbiased estimate of the current true state. Then, new and old particles are recombined for the sake of raising the diversity among the particles. Finally, those particles of low quality are eliminated. Through the evolution, all the retained particles are regarded as the optimal ones, and these particles are utilized to update the current state. By using the proposed resampling approach, not only is the CMI incorporated into each resampled particle, but the particle degeneracy and the loss of diversity among the particles are also mitigated, resulting in improved estimation accuracy. Simulation results show the superiority of the proposed filter over the standard sequential importance resampling (SIR) filter, the auxiliary particle filter and the unscented Kalman particle filter.
Funding: Supported by the National Natural Science Foundation of China (61701029)
Abstract: Object tracking with abrupt motion is an important research topic and has attracted wide attention. To obtain accurate tracking results, an improved particle filter tracking algorithm based on sparse representation and nonlinear resampling is proposed in this paper. First, sparse representation is used to compute particle weights by exploiting the fact that the weights are sparse when the object moves abruptly, so the potential object region can be predicted more precisely. Then, a nonlinear resampling process is proposed by utilizing a nonlinear sorting strategy, which can solve the problem of particle diversity impoverishment caused by traditional resampling methods. Experimental results based on videos containing objects with various abrupt motions have demonstrated the effectiveness of the proposed algorithm.
Abstract: In this paper, large sample properties of resampling tests of hypotheses on the population mean, resampled according to the empirical likelihood and the Kullback-Leibler criteria, are investigated. It is proved that under the null hypothesis both of them are superior to the classical test.
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61300055, U1736215, 61672302), the Zhejiang Natural Science Foundation (Grant Nos. LY17F020010, LZ15F020002), the Ningbo Natural Science Foundation (Grant No. 2017A610123), the Ningbo University Fund (Grant Nos. XKXL1509, XKXL1503), and the K.C. Wong Magna Fund of Ningbo University.
Abstract: Speech resampling is a typical tampering behavior which is often integrated into various speech forgeries, such as splicing, electronic disguising, quality faking and so on. By analyzing the principle of resampling, we found that, compared with natural speech, an inconsistency between the bandwidth of resampled speech and its sampling ratio arises because the interpolation process in resampling is imperfect. Based on this observation, a new resampling detection algorithm based on the inconsistency of band energy is proposed. First, according to the sampling ratio of the suspected speech, a band-pass Butterworth filter is designed to filter out the residual signal. Then, the logarithmic ratio of band energy is calculated from the suspected speech and the filtered speech. Finally, with this logarithmic ratio, resampled and original speech can be discriminated. The experimental results show that the proposed algorithm can effectively detect resampling behavior under various conditions and is robust to MP3 compression.
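The band-energy idea can be illustrated with a few lines of NumPy. Here a direct FFT band-energy measure stands in for the paper's Butterworth filter, and the 0.9-1.0 Nyquist band, function name, and noise-based demonstration are all assumptions of this sketch rather than the paper's settings:

```python
import numpy as np

def band_energy_logratio(x, fs, band=(0.9, 1.0)):
    """Log ratio of spectral energy in a high-frequency band to total energy.

    Speech upsampled from a lower rate retains almost no energy near the
    new Nyquist frequency because interpolation is imperfect, so this
    ratio is strongly negative; a natural signal recorded at rate fs
    keeps noticeable energy there.
    """
    X = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    nyq = fs / 2
    hi = X[(f >= band[0] * nyq) & (f <= band[1] * nyq)].sum()
    return np.log10(hi / X.sum() + 1e-12)
```

A threshold on this ratio then separates resampled from original material, which is the discrimination step described in the abstract.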
Funding: supported by the National Natural Science Foundation of China (60472098, 60502046, U0635003).
Abstract: Two variants of systematic resampling (S-RS) are proposed to increase the diversity of particles and thereby improve the performance of particle filtering when it is utilized for detection in Bell Laboratories Layered Space-Time (BLAST) systems. In the first variant, a Markov chain Monte Carlo transition is integrated into the S-RS procedure to increase the diversity of particles with large importance weights. In the second, all particles are first partitioned into two sets according to their importance weights, and then a double S-RS is introduced to increase the diversity of particles with small importance weights. Simulation results show that both variants can improve the bit error performance efficiently compared with the standard S-RS, with little added complexity.
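For reference, the standard S-RS that both variants build on can be written compactly. This is the generic algorithm (one uniform offset, N evenly spaced pointers into the cumulative weight distribution), not the BLAST-specific variants:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Standard systematic resampling: draw one uniform offset, place N
    evenly spaced pointers into the cumulative weight distribution, and
    return the indices of the particles each pointer lands on."""
    rng = np.random.default_rng(rng)
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    cdf = np.cumsum(np.asarray(weights, dtype=float) / np.sum(weights))
    return np.searchsorted(cdf, positions)
```

Because only a single random number is drawn, S-RS has low variance and O(N) cost, which is why it is the usual baseline that diversity-enhancing variants modify.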
Funding: supported by the Mid-career Researcher Program (No. 2017R1A2B3004946) through the National Research Foundation, funded by the Ministry of Science and ICT of Korea.
Abstract: Free energy calculations may provide vital information for studying various chemical and biological processes. Quantum mechanical methods are required to accurately describe interaction energies, but their computations are often too demanding for conformational sampling. As a remedy, level correction schemes that allow calculating high-level free energies based on conformations from lower-level simulations have been developed. Here, we present a variation of a Monte Carlo (MC) resampling approach in relation to the weighted histogram analysis method (WHAM). We show that our scheme can generate free energy surfaces that can practically converge to the exact one with sufficient sampling, and that it treats cases with insufficient sampling in a more stable manner than the conventional WHAM-based level correction scheme. It can also provide a guide for checking the uncertainty of the level-corrected surface and a well-defined criterion for deciding the extent of smoothing of the free energy surface for its visual improvement. We demonstrate these aspects by obtaining the free energy maps associated with the alanine dipeptide and the proton transfer network of the KillerRed protein in explicit water, and exemplify that the MC resampled WHAM scheme can be a practical tool for producing free energy surfaces of realistic systems.
Funding: supported by the basic research projects of Army Engineering University.
Abstract: Strong spatial variance of the imaging parameters and serious geometric distortion of the image are induced by the acceleration and vertical velocity of a high-squint synthetic aperture radar (SAR) mounted on a maneuvering platform. In this paper, a frequency-domain imaging algorithm is proposed based on a novel slant range model and azimuth perturbation resampling. First, a novel slant range model is presented for mitigating the geometric distortion according to the equal squint angle curve on the ground surface. Second, the correction of azimuth-dependent range cell migration (RCM) is achieved by introducing a high-order time-domain perturbation function. Third, an azimuth perturbation resampling method is proposed for azimuth compression, in which the azimuth resampling and the time-domain perturbation correct the first-order and high-order azimuthal spatially variant components, respectively. Experimental results illustrate that the proposed algorithm can effectively improve the focusing quality and the geometric distortion correction accuracy of the imaged scene.
Abstract: With the rapid progress of image processing software, image forgery can leave no visual clues in the tampered regions and render us unable to authenticate the image. In general, image forgery technologies often utilize scaling, rotation or skewing operations to tamper with some regions of the image, for which resampling and interpolation processes are often required. By observing the detectable periodic distribution properties generated by the resampling and interpolation processes, we propose a novel method based on the intrinsic properties of the resampling scheme to detect the tampered regions. The proposed method applies a pre-calculated resampling weighting table to detect the periodic properties of the prediction error distribution. The experimental results show that the proposed method outperforms conventional methods in terms of efficiency and accuracy.
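The periodic prediction-error property such detectors rely on is easy to demonstrate in one dimension. In the sketch below (an illustration of the underlying phenomenon only, not the paper's weighting-table method), a signal upsampled 2x by linear interpolation has exactly zero "average of the two neighbours" prediction error at every interpolated sample, producing the periodic pattern a detector can pick up:

```python
import numpy as np

def interp_residue(x):
    """Prediction error of the 'average of the two neighbours' predictor.

    For a signal upsampled 2x by linear interpolation, every interpolated
    sample equals the mean of its neighbours, so this residue is exactly
    zero there, giving a periodic pattern that resampling detectors exploit.
    """
    return x[1:-1] - 0.5 * (x[:-2] + x[2:])
```

In 2-D forgery detection, the same idea is applied per pixel with interpolation-kernel weights, and the periodicity of the resulting error map (e.g. its spectral peaks) localizes the resampled regions.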
Abstract: This paper proposes a resampling simulator that calculates the probabilities of detecting invasive species infesting hosts that occur in large numbers. Different methods were examined to determine the bias of observed cumulative distribution functions (c.d.f.s) generated from prototype resampling simulators. One involved checking whether they matched theoretical c.d.f.s, which were generated using formulae for calculating the probability of the union of many events (union formulae), which are known to be correct. Others involved assessing the bias of observed c.d.f.s generated from prototype resampling simulators operating on much larger simulated populations, when computation of theoretical c.d.f.s from the union formulae was not practical. Examples are given of using the proposed resampling simulator to detect an invasive insect pest within the context of an invasive species management system.
Abstract: In data envelopment analysis (DEA), input and output values are subject to change for several reasons. Such variations differ across input/output items and decision-making units (DMUs). Hence, DEA efficiency scores need to be examined by considering these factors. In this paper, we propose new resampling models based on these variations for gauging the confidence intervals of DEA scores. The first model utilizes past-present data for estimating data variations, imposing chronological-order weights supplied by the Lucas series (a variant of the Fibonacci series). The second model deals with future prospects: it aims at forecasting the future efficiency score and its confidence interval for each DMU. We applied our models to a dataset composed of Japanese municipal hospitals.
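The Lucas-series weighting mentioned here is simple to reproduce. A hedged sketch (the normalization to unit sum is my choice; the paper's exact scaling is not given in the abstract) assigns later periods larger weights from the series 2, 1, 3, 4, 7, 11, ...:

```python
import numpy as np

def lucas_weights(n):
    """Chronological weights from the Lucas series (2, 1, 3, 4, 7, 11, ...):
    apart from the initial terms, later periods get progressively larger
    weights; normalized here to sum to 1."""
    l = [2, 1]
    while len(l) < n:
        l.append(l[-1] + l[-2])
    w = np.array(l[:n], dtype=float)
    return w / w.sum()
```

In a past-present resampling model, period t of the historical data would be drawn with probability `lucas_weights(n)[t]`, so recent observations dominate the bootstrap replicates.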
Funding: Supported by the Foundation of Northeast Petroleum University (XN2014106)
Abstract: Medical images are being applied more and more widely in clinical diagnosis and treatment. How to manage the large number of images in an image management system, and how to assist doctors in analysis and diagnosis, are very important issues. This paper studies medical image retrieval based on a multi-layer resampling template under the framework of wavelet decomposition. The image retrieval method consists of two stages: coarse retrieval and fine retrieval. The coarse retrieval stage retrieves medical images based on image contour features. The fine retrieval stage retrieves medical images based on the multi-layer resampling template: a multi-layer sampling operator is employed to extract resampled images at each layer, and these resampled images are then retrieved step by step to complete the coarse-to-fine process.