Funding: supported by the Platform Development Foundation of the China Institute for Radiation Protection (No. YP21030101), the National Natural Science Foundation of China (General Program) (Nos. 12175114, U2167209), the National Key R&D Program of China (No. 2021YFF0603600), and the Tsinghua University Initiative Scientific Research Program (No. 20211080081).
Abstract: Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposed a grid-AIS method for the global variance reduction problem based on the AIS method, implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-III international benchmark problem and a self-shielding calculation example. The results from the VENUS-III benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the MESH grids, from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction. This demonstrates that the grid-AIS method is effective for global variance reduction problems. The results of the self-shielding calculation demonstrate that the grid-AIS method produced accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
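The figure of merit quoted above is the variance of the per-cell statistical (relative) errors over the mesh. As a minimal sketch with hypothetical tallies (not MCShield data), the uniformity metric can be computed like this:

```python
import math
import statistics

def relative_errors(tallies):
    """Relative error R = sigma_of_mean / mean for each mesh cell,
    given one list of per-history scores per cell (hypothetical data)."""
    errs = []
    for scores in tallies:
        n = len(scores)
        mean = sum(scores) / n
        var = sum((x - mean) ** 2 for x in scores) / (n - 1)  # sample variance
        errs.append(math.sqrt(var / n) / mean)
    return errs

# hypothetical per-history tallies for three mesh cells
cells = [[1.0, 1.2, 0.9, 1.1], [0.5, 0.7, 0.6, 0.4], [2.0, 1.8, 2.2, 2.1]]
errs = relative_errors(cells)
spread = statistics.pvariance(errs)  # uniformity metric: variance of the errors
```

A global variance reduction scheme aims to drive `spread` down, not just the error of one detector cell.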
Funding: supported by the National Natural Science Foundation of China (61372136).
Abstract: To deal with the particle degeneracy and impoverishment problems that exist in particle filters, a modified sequential importance resampling (MSIR) filter is proposed. In this filter, resampling is recast as an evolutionary process, analogous to biological evolution. A particle generator is constructed, which introduces the current measurement information (CMI) into the resampled particles. In the evolution, new particles are first produced through the particle generator, each of which is essentially an unbiased estimate of the current true state. Then, new and old particles are recombined to raise the diversity among the particles. Finally, low-quality particles are eliminated. Through the evolution, all the retained particles are regarded as optimal, and these particles are used to update the current state. By using the proposed resampling approach, the CMI is incorporated into each resampled particle, and both particle degeneracy and the loss of diversity among the particles are mitigated, resulting in improved estimation accuracy. Simulation results show the superiority of the proposed filter over the standard sequential importance resampling (SIR) filter, the auxiliary particle filter, and the unscented Kalman particle filter.
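For context, MSIR modifies the resampling step of the standard SIR filter. A minimal sketch of the baseline systematic resampling step that such schemes build on (a generic textbook version, not the MSIR evolution itself):

```python
import random

def systematic_resample(particles, weights):
    """Standard systematic resampling: one uniform draw gives N evenly
    spaced pointers over the cumulative normalized weights."""
    n = len(particles)
    u = random.random()
    positions = [(u + i) / n for i in range(n)]
    cumulative, c = [], 0.0
    for w in weights:
        c += w
        cumulative.append(c)
    out, j = [], 0
    for p in positions:
        while j < n - 1 and p > cumulative[j]:
            j += 1
        out.append(particles[j])
    return out
```

Because the surviving particles are exact copies of heavy-weight parents, diversity collapses over time; the MSIR evolution step exists precisely to counter that.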
Funding: Project (61372136) supported by the National Natural Science Foundation of China.
Abstract: The design, analysis, and parallel implementation of the particle filter (PF) were investigated. Firstly, to tackle the particle degeneracy problem in the PF, an iterated importance density function (IIDF) was proposed, in which a new term associated with the current measurement information (CMI) was introduced into the expression of the sampled particles. Through repeated use of the least-squares estimate, the CMI can be integrated into the sampling stage in an iterative manner, leading to greatly improved sampling quality. Running the IIDF yields an iterated PF (IPF). Subsequently, a parallel resampling (PR) scheme was proposed for the parallel implementation of the IPF; its main idea is the same as that of systematic resampling (SR), but it is performed differently. The PR directly uses the integer part of the product of the particle weight and the particle number as the number of times a particle is replicated, and it simultaneously eliminates the particles with the smallest weights, which are the two key differences from the SR. Finally, the detailed procedures for implementing the PR-based IPF on a graphics processing unit are presented. The performance of the IPF, the PR, and their parallel implementations is illustrated via a one-dimensional numerical simulation and a practical application to passive radar target tracking.
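The PR replication rule described above, the integer part of weight times particle count with the smallest-weight particles dropped, can be sketched as follows (the tie-breaking that fills leftover slots is an assumption, since the abstract does not specify it):

```python
def parallel_resample_counts(weights):
    """PR-style replication counts: particle i is copied floor(N * w_i)
    times; leftover slots go to the heaviest particles, so the
    smallest-weight particles get zero copies (tie-breaking assumed)."""
    n = len(weights)
    counts = [int(n * w) for w in weights]  # integer part of N * w_i
    deficit = n - sum(counts)
    heaviest = sorted(range(n), key=lambda i: weights[i], reverse=True)
    for i in range(deficit):
        counts[heaviest[i % n]] += 1
    return counts
```

For weights [0.5, 0.3, 0.15, 0.05] and N = 4 this yields [3, 1, 0, 0]: each count depends only on its own weight, so the counts can be computed in parallel, unlike the sequential pointer scan in systematic resampling.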
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 52105136 and 51975028, the China Postdoctoral Science Foundation under Grant No. 2021M690290, and the National Science and Technology Major Project under Grant No. J2019-IV-0002-0069.
Abstract: The reliability and sensitivity analyses of a stator blade regulator usually involve complex characteristics such as high nonlinearity, multiple failure regions, and small failure probability, which lead to unacceptable computing efficiency and accuracy in current analysis methods. In this case, by fitting the implicit limit state function (LSF) with an active Kriging (AK) model and reducing the candidate sample pool with adaptive importance sampling (AIS), a novel AK-AIS method is proposed. Herein, the AK model and Markov chain Monte Carlo (MCMC) are first established to identify the most probable failure region(s) (MPFRs), and the adaptive kernel density estimation (AKDE) importance sampling function is constructed to select the candidate samples. With the best samples sequentially attained from the reduced candidate samples and employed to update the Kriging-fitted LSF, the failure probability and sensitivity indices are acquired at a lower cost. The proposed method is verified on two multi-failure numerical examples and then applied to the reliability and sensitivity analyses of a typical stator blade regulator. Through comparison with other methods, the proposed AK-AIS is shown to hold computing advantages in accuracy and efficiency for complex reliability and sensitivity analysis problems.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 61575205 and 62175022), the Sichuan Natural Science Foundation (2022NSFSC0803), and the Sichuan Science and Technology Program (2021JDRC0035).
Abstract: The nonuniform distribution of the interference spectrum in wavenumber (k) space is a key issue limiting the imaging quality of Fourier-domain optical coherence tomography (FD-OCT). At present, the reconstruction quality at different depths among the various k-space processing methods is still uncertain. Using simulated and experimental interference spectra at different depths, the effects of six common processing methods, including uniform resampling (linear interpolation (LI), cubic spline interpolation (CSI), time-domain interpolation (TDI), and K-B window convolution) and nonuniform-sampling direct reconstruction (Lomb periodogram (LP) and nonuniform discrete Fourier transform (NDFT)), on the reconstruction quality of FD-OCT were quantitatively analyzed and compared in this work. The results obtained from simulated and experimental data were consistent. From the experimental results, the averaged peak intensity, axial resolution, and signal-to-noise ratio (SNR) of the NDFT at depths from 0.5 to 3.0 mm were improved by about 1.9 dB, 1.4 times, and 11.8 dB, respectively, compared with the averaged indices of all the uniform resampling methods at all depths. Similarly, the improvements in the above three indices for the LP were 2.0 dB, 1.4 times, and 11.7 dB, respectively. The analysis method and the results obtained in this work are helpful for selecting an appropriate k-space processing method and thereby improving the imaging quality of FD-OCT.
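A minimal sketch of the NDFT direct-reconstruction idea, using an assumed nonuniform k grid and a single simulated reflector (the phase convention exp(2jkz) and the grid are illustrative, not the paper's setup):

```python
import cmath
import math

def ndft_depth_profile(k, s, depths):
    """Direct nonuniform DFT: evaluate |sum_n s_n * exp(2j * k_n * z)|
    at each depth z, with no resampling of the k grid."""
    return [abs(sum(sv * cmath.exp(2j * kv * z) for kv, sv in zip(k, s)))
            for z in depths]

z0 = 1.5                                                    # reflector depth (a.u.)
k = [10.0 + 5.0 * (i / 119.0) ** 1.2 for i in range(120)]   # nonuniform k samples
s = [math.cos(2.0 * kv * z0) for kv in k]                   # single-reflector fringe
depths = [0.025 * i for i in range(121)]                    # 0 .. 3.0
profile = ndft_depth_profile(k, s, depths)
peak_z = depths[profile.index(max(profile))]                # should land near z0
```

Because the transform is evaluated at the measured k values directly, no interpolation error is introduced, which is the qualitative reason for the depth-dependent SNR advantage reported above (at the cost of O(NM) work instead of an FFT).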
Abstract: Based on observations of importance sampling and second-order information about the failure surface of a structure, an importance sampling region is defined in V-space, which is obtained by rotating the U-space about the point of maximum likelihood. The sampling region is a hyper-ellipsoid consisting of the sampling ellipse on each plane of principal curvature in V-space. Thus, the sampling probability density function can be constructed from the center of the sampling region and the ellipsoid axes. Several examples have shown the efficiency and generality of this method.
Funding: supported by the Natural Science Basic Research Program of Shaanxi Province, China (2021JQ-369).
Abstract: In this paper, an importance sampling maximum likelihood (ISML) estimator for the direction-of-arrival (DOA) of incoherently distributed (ID) sources is proposed. Starting from the maximum likelihood estimation description of the uniform linear array (ULA), a decoupled concentrated likelihood function (CLF) is presented. A new objective function based on the CLF, whose global maximum admits a closed-form solution, is constructed according to the Pincus theorem. To obtain the optimal value of the objective function, which is a complex high-dimensional integral, we propose an importance sampling approach based on Monte Carlo random calculation. Next, an importance function is derived, which simplifies the problem of generating a random vector from a high-dimensional probability density function (PDF) to that of generating a random variable from a one-dimensional PDF. Compared with existing maximum likelihood (ML) algorithms for DOA estimation of ID sources, the proposed algorithm does not require initial estimates, and its performance is closer to the Cramér-Rao lower bound (CRLB). The proposed algorithm performs better than existing methods when the separation between the sources to be estimated is small and in low signal-to-noise ratio (SNR) scenarios.
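The Pincus-theorem construction mentioned above expresses a global maximizer as a ratio of two integrals, which importance sampling can estimate without initial values. A one-dimensional sketch on a toy multimodal objective (the objective, interval, and sharpness parameter rho are assumptions for illustration):

```python
import math
import random

def pincus_argmax(J, a, b, rho=50.0, n=100000, seed=1):
    """Pincus-theorem sketch: the global maximizer of J on [a, b] is
    approximated by the self-normalized importance-sampling estimate of
    integral(x * exp(rho * J(x))) / integral(exp(rho * J(x))),
    using a uniform proposal on [a, b]."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = rng.uniform(a, b)
        w = math.exp(rho * J(x))
        num += x * w
        den += w
    return num / den

# toy objective: global maximum at x = 2, a lower local maximum near x = -3
J = lambda x: -min((x - 2.0) ** 2, (x + 3.0) ** 2 + 0.5)
est = pincus_argmax(J, -5.0, 5.0)
```

As rho grows, exp(rho * J) concentrates on the global maximizer, so the weighted mean ignores the local maximum; the ISML estimator applies the same idea to the CLF with a tailored importance function instead of a uniform proposal.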
Funding: financially supported by the National Natural Science Foundation of China (Grant No. 51279128), the Innovative Research Groups Science Foundation of China (Grant No. 51321065), and the Construction Science and Technology Project of the Ministry of Transport of the People's Republic of China (Grant No. 2013328224070).
Abstract: It is assumed that a storm wave takes place once a year during the design period, and N histories of storm waves are generated on the basis of the wave spectrum corresponding to the N-year design period. The responses of the breakwater to the N histories of storm waves in the N-year design period are calculated by a mass-spring-dashpot model and taken as a set of samples. The failure probability of caisson breakwaters during the design period of N years is obtained by statistical analysis of many sets of samples. Improving the efficiency of the common Monte Carlo simulation method is the key issue in estimating the failure probability of caisson breakwaters over the complete life cycle. In this paper, the kernel method of importance sampling, which can greatly increase the efficiency of the failure probability calculation, is proposed to estimate the failure probability of caisson breakwaters over the complete life cycle. The effectiveness of the kernel method is investigated with an example. It is indicated that the calculation efficiency of the kernel method is over 10 times that of the common Monte Carlo simulation method.
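A kernel importance sampling density of this kind is typically a kernel mixture centered on observed failure samples. A one-dimensional sketch with a hypothetical limit state (failure iff x > 3 for x ~ N(0,1)) and assumed pilot failure points, not the breakwater model:

```python
import math
import random

def kernel_is_failure_prob(centers, h=0.3, sims=20000, seed=11):
    """Kernel importance sampling sketch: sample from an equal-weight
    Gaussian mixture centered on pilot failure samples and reweight by
    the likelihood ratio. Toy limit state: failure iff x > 3, x ~ N(0,1)."""
    rng = random.Random(seed)

    def phi(x, m=0.0, s=1.0):  # normal density
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

    total = 0.0
    for _ in range(sims):
        x = rng.gauss(rng.choice(centers), h)          # draw from the mixture
        if x > 3.0:                                    # failure indicator
            q = sum(phi(x, m, h) for m in centers) / len(centers)
            total += phi(x) / q                        # likelihood ratio
    return total / sims

p = kernel_is_failure_prob([3.1, 3.3, 3.6])  # hypothetical pilot failure points
```

The exact answer here is Phi(-3) ≈ 1.35×10^(-3); crude Monte Carlo would need millions of histories for comparable accuracy because almost every sample is a non-failure. One caveat: the kernels must cover the failure region well, or the likelihood-ratio weights become heavy-tailed.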
Abstract: The process of changing the channel associated with the current connection while a call is in progress is under consideration. The estimation of the dropping rate in the handover process of a one-dimensional traffic system is discussed. To reduce the sample size of the simulation, call dropping at the base station is treated as a rare event and simulated with importance sampling, one of the rare-event simulation approaches. The simulation results suggest that the sample size can be tremendously reduced by using importance sampling.
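Rare-event importance sampling of this kind can be sketched by simulating with an inflated drop probability and reweighting each run by its likelihood ratio (the traffic model below, independent drops per call, is a toy assumption, not the paper's handover model):

```python
import random

def dropping_prob_is(n=100, p=0.01, k=10, p_tilt=0.1, sims=20000, seed=7):
    """Rare-event sketch: estimate P(at least k of n independent calls are
    dropped) by simulating with an inflated drop probability p_tilt and
    reweighting each run by its likelihood ratio."""
    rng = random.Random(seed)
    lr_drop = p / p_tilt                  # per-call ratio for a dropped call
    lr_keep = (1.0 - p) / (1.0 - p_tilt)  # per-call ratio for a carried call
    total = 0.0
    for _ in range(sims):
        drops = sum(rng.random() < p_tilt for _ in range(n))
        if drops >= k:
            total += lr_drop ** drops * lr_keep ** (n - drops)
    return total / sims

est = dropping_prob_is()  # true value P(Bin(100, 0.01) >= 10) is about 7.6e-8
```

Under the original probability p = 0.01 the event occurs roughly once per ten million runs, so 20,000 tilted runs replace on the order of 10^9 plain Monte Carlo runs for a few-percent relative error.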
Abstract: Value at Risk (VaR) is an important tool for estimating the risk of a financial portfolio under significant loss. Although Monte Carlo simulation is a powerful tool for estimating VaR, it is quite inefficient, since the event of significant loss is usually rare. Previous studies suggest that the performance of Monte Carlo simulation can be improved by importance sampling if the market returns follow certain standard distributions, such as the normal distribution. The first contribution of our paper is to extend the importance sampling method to deal with jump-diffusion market returns, which can more precisely model the high peaks, heavy tails, and jumps of market returns noted in numerous empirical studies. This paper also points out that for portfolios whose huge losses are triggered by significantly distinct events, naively applying the importance sampling method can result in poor performance. The second contribution of our paper is to develop a hybrid importance sampling method for this problem. Our method decomposes a Monte Carlo simulation into sub-simulations, each of which focuses on only one huge-loss event. Thus the performance of each sub-simulation is improved by the importance sampling method, and the overall performance is optimized by determining the allotment of samples to each sub-simulation via a Lagrange multiplier. Numerical experiments are given to verify the superiority of our method.
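The Lagrange-multiplier allotment mentioned above is the classical result that minimizing the total variance sum(sigma_i^2 / n_i) subject to a fixed budget sum(n_i) = N gives n_i proportional to sigma_i. A minimal sketch:

```python
def allot(sigmas, total):
    """Lagrange-multiplier allocation: minimizing sum(sigma_i**2 / n_i)
    subject to sum(n_i) = total gives n_i proportional to sigma_i."""
    s = sum(sigmas)
    return [round(total * x / s) for x in sigmas]

# a sub-simulation with 3x the standard deviation gets 3x the samples
shares = allot([3.0, 1.0], 100)
```

Rounded shares may miss the exact budget by a sample or two; a production version would redistribute the remainder, and in practice the per-event sigmas would themselves be estimated from pilot runs.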
Funding: Project (61271296) supported by the National Natural Science Foundation of China.
Abstract: The current measurement was exploited in a more efficient way. Firstly, the system equation was updated by introducing a correction term, which depends on the current measurement and can be obtained by running a suboptimal filter. Then, a new importance density function (IDF) was defined by the updated system equation. Particles drawn from the new IDF are more likely to lie in the significant region of the state space, and the estimation accuracy can thereby be improved. By using different suboptimal filters, different particle filters (PFs) can be developed in this framework. Extensions of this idea were also proposed by iteratively updating the system equation using the particle filter itself, resulting in the iterated particle filter. Simulation results demonstrate the effectiveness of the proposed IDF.
Funding: the Science Research Start-up Foundation for Young Teachers of Southwest Jiaotong University (No. 2007Q091).
Abstract: In general, the accuracy of the mean estimator can be improved by stratified random sampling. In this paper, we present an idea, different from empirical methods, that the accuracy can be further improved through the bootstrap resampling method under some conditions. The determination of the sample size by the bootstrap method is also discussed, and a simulation is carried out to verify the accuracy of the proposed method. The simulation results show that the sample size based on bootstrapping is smaller than that based on the central limit theorem.
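A minimal sketch of the bootstrap resampling idea used here, estimating the standard error of the mean by resampling with replacement (toy data, not the paper's simulation):

```python
import random
import statistics

def bootstrap_se(sample, reps=2000, seed=0):
    """Bootstrap standard error of the mean: resample with replacement,
    take the standard deviation of the resampled means."""
    rng = random.Random(seed)
    n = len(sample)
    means = [statistics.fmean(rng.choices(sample, k=n)) for _ in range(reps)]
    return statistics.stdev(means)

data = [float(i) for i in range(10)]  # toy sample
se = bootstrap_se(data)               # close to pstdev(data)/sqrt(10), about 0.91
```

A sample size determined by bootstrapping inverts this: increase n until the bootstrap standard error falls below the target precision, which can undercut the conservative normal-theory n when the data are well behaved.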
Abstract: The rejection sampling method is one of the most popular methods used in Monte Carlo methods. It turns out that the standard rejection method is closely related to the problem of quasi-Monte Carlo integration of characteristic functions, whose accuracy may be lost due to the discontinuity of the characteristic functions. We propose a B-spline smoothed rejection sampling method, which smooths the characteristic function by a B-spline smoothing technique without changing the value of the integral. Numerical experiments show that a convergence rate of nearly O(N^(-1)) is regained by using the B-spline smoothed rejection method in importance sampling.
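For reference, the standard rejection method that the paper smooths can be sketched as follows, here for the toy density f(x) = 2x on [0, 1] with a uniform proposal:

```python
import random

def rejection_sample(n, seed=3):
    """Standard rejection sampling for f(x) = 2x on [0, 1] with a uniform
    proposal g and envelope M = 2: accept x iff u < f(x) / (M * g(x)) = x."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.random()
        if rng.random() < x:  # acceptance test; this hard 0/1 cut is the
            out.append(x)     # discontinuous characteristic function that
    return out                # B-spline smoothing replaces

sample = rejection_sample(20000)
mean = sum(sample) / len(sample)  # E[X] under f is 2/3
```

Viewed as integration, the accept/reject indicator is a discontinuous characteristic function of the region u < f(x)/(Mg(x)); that discontinuity is what degrades the quasi-Monte Carlo rate and what the B-spline smoothing repairs.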
Abstract: This note introduces a method for sampling Ising models with mixed boundary conditions. As an application of annealed importance sampling and the Swendsen-Wang algorithm, the method adopts a sequence of intermediate distributions that keeps the temperature fixed but turns on the boundary condition gradually. The numerical results show that the variance of the sample weights is relatively small.
Abstract: In this paper, we propose a K-means clustering-based integral level-value estimation algorithm to solve a class of box-constrained global optimization problems. For this purpose, we introduce the generalized variance function associated with the level value of the objective function to be minimized. The variance function has a good property when Newton's method is used to solve the variance equation obtained by setting the variance function to zero. We prove that the largest root of the variance equation equals the global minimum value of the corresponding optimization problem. Based on the K-means clustering algorithm, a multiple importance sampling technique is proposed in the implementable algorithm. The main idea of the cross-entropy method is used to update the parameters of the sampling density function. The asymptotic convergence of the algorithm is proved, and its validity is verified by numerical experiments.
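The cross-entropy parameter update mentioned above, refitting the sampling density to an elite subset of samples, can be sketched in one dimension with a single Gaussian (the Gaussian family and elite fraction are assumptions for illustration; the paper uses K-means-based multiple importance sampling):

```python
import random
import statistics

def cross_entropy_min(f, mu=0.0, sigma=5.0, n=100, elite=10, iters=40, seed=2):
    """Cross-entropy sketch: sample from N(mu, sigma), keep the elite
    lowest-f samples, and refit mu and sigma to them."""
    rng = random.Random(seed)
    for _ in range(iters):
        xs = sorted((rng.gauss(mu, sigma) for _ in range(n)), key=f)
        best = xs[:elite]
        mu = statistics.fmean(best)
        sigma = statistics.stdev(best) + 1e-12  # floor keeps sampling alive
    return mu

est = cross_entropy_min(lambda x: (x - 3.0) ** 2)  # minimum at x = 3
```

Each refit pulls the sampling density toward the current low-level set of the objective, which is the same mechanism the paper uses, with K-means clusters supplying one sampling component per promising region instead of a single Gaussian.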