Image processing and image analysis are the main techniques for extracting information from digital images, since they provide the desired details in most applications generally and in non-destructive testing specifically. This paper presents a proposed method for the automatic detection of weld defects in radiographic images. Firstly, the radiographic images are enhanced using adaptive histogram equalization and filtered using mean and Wiener filters. Secondly, the welding area is selected from the radiographic image. Thirdly, cepstral features are extracted from the higher-order spectra (bispectrum and trispectrum). Finally, neural networks are used for feature matching. The proposed method is tested on 100 radiographic images in the presence of noise and image blurring. Results show that, despite its time consumption, the proposed method yields the best results for the automatic detection of weld defects in radiographic images when the features are extracted from the trispectrum of the image.
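The abstract gives no code; a minimal sketch of such a preprocessing chain might look as follows, using global histogram equalization as a simplified stand-in for the adaptive variant, plus a mean filter and SciPy's Wiener filter. The function names are illustrative, not from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import wiener

def equalize_histogram(image):
    """Global histogram equalization on an 8-bit grayscale image."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # lookup table
    return lut[image]

def preprocess(image):
    """Equalize, then denoise with a 3x3 mean filter and a Wiener filter."""
    eq = equalize_histogram(image).astype(float)
    smoothed = uniform_filter(eq, size=3)   # mean filter
    restored = wiener(smoothed, mysize=3)   # adaptive Wiener filter
    return restored
```

The feature-extraction and neural-network stages of the paper would follow this step.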
Estimation of the parameters of the generalized logistic distribution (GLD) is obtained based on record statistics from both Bayesian and non-Bayesian approaches. The Bayes estimators cannot be obtained in explicit form, so Markov chain Monte Carlo (MCMC) algorithms are used for computing the Bayes estimates. Point estimation and confidence intervals based on maximum likelihood and the parametric bootstrap are proposed for estimating the unknown parameters. A numerical example is analyzed for illustrative purposes, and comparisons are made between the Bayesian and maximum likelihood estimators via Monte Carlo simulation.
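The abstract does not specify the sampler; a minimal sketch of the kind of MCMC step involved, here a random-walk Metropolis sampler for the location parameter of a logistic likelihood under a flat prior (a simplified illustration, not the paper's record-data posterior), might be:

```python
import numpy as np
from scipy.stats import logistic

def metropolis_location(data, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for the location parameter of a
    logistic likelihood under a flat prior (simplified illustration)."""
    rng = np.random.default_rng(seed)
    mu = np.median(data)                          # starting value
    loglik = logistic.logpdf(data, loc=mu).sum()
    samples = []
    for _ in range(n_iter):
        prop = mu + step * rng.standard_normal()  # symmetric proposal
        loglik_prop = logistic.logpdf(data, loc=prop).sum()
        if np.log(rng.uniform()) < loglik_prop - loglik:
            mu, loglik = prop, loglik_prop        # accept the proposal
        samples.append(mu)
    return np.array(samples)
```

The posterior mean of the retained draws (after burn-in) serves as the Bayes estimate under squared-error loss.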
This study aimed to examine the performance of the Siegel-Tukey and Savage tests on data sets with heterogeneous variances. The analysis, considering Normal, Platykurtic, and Skewed distributions and a standard deviation ratio of 1, was conducted for both small and large sample sizes. For small sample sizes, two main categories were established: equal and different sample sizes. Analyses were performed using Monte Carlo simulations with 20,000 repetitions for each scenario, and the simulations were evaluated using SAS software. For small sample sizes, the Type I error rate of the Siegel-Tukey test generally ranged from 0.045 to 0.055, while that of the Savage test ranged from 0.016 to 0.041. Similar trends were observed for the Platykurtic and Skewed distributions. In scenarios with different sample sizes, the Savage test generally exhibited lower Type I error rates. For large sample sizes, the same two categories were used: equal and different sample sizes. Here the Type I error rate of the Siegel-Tukey test ranged from 0.047 to 0.052, while that of the Savage test ranged from 0.043 to 0.051. With equal sample sizes, both tests generally had lower error rates, and the Savage test provided more consistent results for large samples. In conclusion, the Savage test provides lower Type I error rates for small sample sizes, and both tests have similar error rates for large sample sizes. These findings suggest that the Savage test may be a more reliable option when analyzing variance differences.
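The Siegel-Tukey test has no SciPy implementation, so the sketch below substitutes the closely related Ansari-Bradley scale test (`scipy.stats.ansari`) purely to illustrate the Monte Carlo estimation of a Type I error rate under a true null, the core of the simulation design described above:

```python
import numpy as np
from scipy.stats import ansari

def type_one_error_rate(n1=10, n2=10, reps=500, alpha=0.05, seed=0):
    """Monte Carlo estimate of the Type I error rate of a rank-based
    scale test when both samples truly have equal variance (H0 holds)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.standard_normal(n1)
        y = rng.standard_normal(n2)   # same scale: null hypothesis true
        _, p = ansari(x, y)
        rejections += p < alpha
    return rejections / reps
```

A well-calibrated test should reject at a rate close to the nominal alpha; the paper's design repeats this over distributions, sample sizes, and 20,000 replications.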
The law of mass action, based on Maxwellian statistics, cannot explain recent epicatalysis experiments, but does when generalized to non-Maxwellian statistics. Challenges to the second law are traced to statistical heterogeneity that falls outside the assumptions of homogeneity and indistinguishability made by Boltzmann, Gibbs, Tolman, and von Neumann in their H-theorems. Epicatalysis operates outside these assumptions; hence, the H-theorems do not apply to it and the second law is bypassed, not broken. There is no contradiction with correctly understood established physics. Other phenomena also based on heterogeneous statistics include non-Maxwellian adsorption, the field-induced thermoelectric effect, and the reciprocal Hall effect. Elementary particles have well-known distributions such as Fermi-Dirac and Bose-Einstein, but composite particles, such as those involved in chemical reactions, have complex, intractable statistics that are not necessarily Maxwellian and are best determined by quantum modeling methods. A step-by-step procedure for finding the quantum thermodynamic properties of a composite quantum gas, which avoids the computational cost of modeling a large number of composite particles, includes 1) quantum molecular modeling of a few particles, 2) determining their available microstates, 3) producing their partition function, 4) generating their statistics, and 5) producing the epicatalytic parameter for the generalized law of mass action.
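Step 3 of the outlined procedure, producing a partition function from a list of microstate energies, can be sketched in a few lines (a generic canonical-ensemble computation, not the paper's specific model):

```python
import numpy as np

def partition_function(energies, kT=1.0):
    """Canonical partition function Z = sum_i exp(-E_i / kT) and the
    resulting occupation probabilities over the listed microstates."""
    weights = np.exp(-np.asarray(energies, dtype=float) / kT)
    Z = weights.sum()
    return Z, weights / Z
```

Step 4 then reads the statistics off the occupation probabilities, which need not be Maxwellian when the microstate structure of the composite particles is heterogeneous.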
Choosing appropriate statistical tests is crucial, but deciding which test to use can be challenging: a wide variety of tests are available, and different tests suit different types of data and research questions. Knowing how to select an appropriate test leads to more accurate results, whereas an incorrect statistical test can produce invalid results and misleading conclusions. To avoid these pitfalls, it is essential to understand the nature of the data, the research question, and the assumptions of the tests before selecting one. This paper provides a step-by-step approach to selecting the right statistical test for any study, with an explanation of when each test is appropriate and relevant examples. Furthermore, this guide provides a comprehensive overview of the assumptions of each test and what to do if these assumptions are violated.
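One step of such a selection workflow, checking a test's normality assumption before choosing between a parametric and a non-parametric two-sample test, can be sketched as follows (an illustrative rule of thumb, not the paper's full decision tree):

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Run Shapiro-Wilk on each group; use an independent t-test if both
    look normal, otherwise fall back to the Mann-Whitney U test."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        name, result = "t-test", stats.ttest_ind(a, b)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(
            a, b, alternative="two-sided")
    return name, result.pvalue
```

The same pattern extends to paired designs and to more than two groups (ANOVA versus Kruskal-Wallis).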
Let X_1, ..., X_n be independent and identically distributed random variables and W_n = W_n(X_1, ..., X_n) be an estimator of a parameter θ. Denote T_n = (W_n − θ_0)/s_n, where s_n^2 is a variance estimator of W_n. In this paper a general result on the limiting distributions of the non-central studentized statistic T_n is given. In particular, when s_n^2 is the jackknife estimate of variance, it is shown that the limit can be normal, a weighted χ^2 distribution, a stable distribution, or a mixture of normal and stable distributions. Applications to the power of the studentized U- and L-tests are also discussed. (Supported in part by Hong Kong UST Grant No. DAG05/06.SC and Hong Kong RGC CERG Grant No. 602206, by National Natural Science Foundation Grant No. 10801118, and by the PhD Programs Foundation of the Ministry of Education of China Grant No. 200803351094.)
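The jackknife variance estimate mentioned above has a compact form: recompute the statistic on each leave-one-out sample and scale the spread of those replicates. A minimal sketch (for the sample mean it reduces exactly to s^2/n):

```python
import numpy as np

def jackknife_variance(x, estimator=np.mean):
    """Jackknife estimate of the variance of an estimator:
    var_jack = (n-1)/n * sum_i (theta_(i) - theta_bar)^2,
    where theta_(i) is the statistic on the sample with x_i removed."""
    n = len(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
```

The paper's point is that studentizing by this s_n can change the limiting law away from the normal when W_n is non-regular.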
Estimation of the bivariate survival function under the competing risks case is considered. We give an explicit formula for the estimator, obtained from a decomposition of the bivariate survival function based on competing risks, which is almost surely consistent.
In this paper, the following contaminated linear model is considered: y_i = (1 − ε) x_i^τ β + z_i, 1 ≤ i ≤ n, where the random variables {y_i} are contaminated with errors {z_i}. The errors are assumed to have finite moments of order 2 only. Non-parametric estimators of the contamination coefficient ε and the regression parameter β are established, and the strong consistency and almost-sure convergence rates of the estimators are obtained. A simulated example is also given to show the visual performance of the estimators.
The angular-spectrum gain characteristics and power magnification characteristics of high-gain non-walk-off collinear optical parametric oscillators have been studied using the non-collinear phase-matching method for the first time. The experimental results for the KTiOAsO4 and KTiOPO4 crystals are discussed in detail. Under the high-energy single-resonant condition, a low reflectivity of the output mirror for the signal and a long non-linear crystal are beneficial for small divergence angles. This method can also be used for other high-gain non-walk-off phase-matched optical parametric processes.
Short-term traffic flow forecasting is one of the core technologies for realizing traffic flow guidance. In this article, in view of the fact that traffic flow changes repeatedly, a short-term traffic flow forecasting method based on a three-layer K-nearest neighbor non-parametric regression algorithm is proposed. Specifically, two screening layers based on shape similarity are introduced into the K-nearest neighbor non-parametric regression method, and the forecasting results are output using weighted averaging of the reciprocals of the shape-similarity distances together with a most-similar-point distance adjustment. According to the experimental results, the proposed algorithm improves the predictive ability of the traditional K-nearest neighbor non-parametric regression method, and greatly enhances the accuracy and real-time performance of short-term traffic flow forecasting.
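The core of K-nearest-neighbor non-parametric regression for time series can be sketched as follows: match the most recent window of observations against all historical windows and average the values that followed the k nearest matches, weighted by inverse distance. This is the single-layer baseline; the paper adds two shape-similarity screening layers on top of it.

```python
import numpy as np

def knn_forecast(series, window=4, k=3):
    """Forecast the next value by matching the last `window` points
    against all historical windows and inverse-distance-weighting the
    values that followed the k nearest matches."""
    series = np.asarray(series, dtype=float)
    pattern = series[-window:]
    candidates, nxt = [], []
    for i in range(len(series) - window):
        candidates.append(series[i:i + window])
        nxt.append(series[i + window])
    dists = np.linalg.norm(np.array(candidates) - pattern, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)   # avoid division by zero
    return np.average(np.array(nxt)[nearest], weights=weights)
```

On strongly periodic data (as repeated daily traffic patterns often are), the nearest historical windows dominate the weighted average.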
The solution of the time-dependent periodically pumped non-degenerate optical parametric amplifier (NOPA) is derived with pump depletion considered both above and below threshold. Based on this solution, the calculated quantum fluctuations show that high entanglement and a good squeezing degree of the parametric light beams are achieved near and above threshold. We adopt two kinds of pump fields: (i) a continuously modulated pump with a sinusoidal envelope; (ii) a sequence of laser pulses with Gaussian envelopes. We analyse the time evolution of continuous-variable entanglement by analytical and numerical calculations, and show that periodically driven pumping also improves the degree of entanglement. The squeezing and Einstein-Podolsky-Rosen (EPR) entanglement under the two pump driving functions are investigated from below to above the threshold region; the tendencies are nearly the same, and the Gaussian driving function is superior to the sinusoidal one when the maximum squeezing and the minimum variance of the quantum fluctuations are considered. In addition, the generalization from the frequency-degenerate OPA to the frequency-non-degenerate OPA problem is investigated. (Project supported by the Natural Science Foundation of Shanxi Province, China, Grant No. 2006011003.)
This paper applies the minimum-variance V1 criterion to monitor the evolution of the signal and idler modes of a composite non-degenerate optical parametric amplification (NOPA) system. Analytical and numerical calculations show the influence of the transition time, the vacuum fluctuations, and the thermal noise level on the EPR entanglement of the composite NOPA system. It is found that the entanglement and the squeezing degrade as the minimum variance V1 increases. (Project supported by the Natural Science Foundation of Shanxi Province, China, Grant No. 2006011003.)
A quantitative study was used to examine the tendency of drought indicators to change in Vietnam through a case study of Ninh Thuan province. The research data are temperature and precipitation records of 11 stations, inside and outside Ninh Thuan province, from 1986 to 2016. The author uses a non-parametric analysis method and drought-index calculation methods. Specifically, for the non-parametric analysis the author uses the Mann-Kendall (MK) and Theil-Sen (Sen's slope) tests, and to analyze drought the author uses the Standardized Precipitation Index (SPI) and the Moisture Index (MI). The two software packages used in this study are ProUCL 5.1 and MAKESENS 1.0, by the US Environmental Protection Agency and the Finnish Meteorological Institute, respectively. The calculation results show that meteorological drought will decrease in the future, with areas such as Phan Rang, Song Pha, Quan The, and Ba Thap tending to increase very clearly, while the tendency at Tam My and Nhi Ha is much less pronounced. For agricultural drought, the average MI increased by 0.013 per year, with Song Pha station showing the highest increasing tendency (0.03 per year) and Nhi Ha the lowest (0.001 per year). The forecast results also show that by the end of the 21st century the SPI tends to decrease, with SPI 1 at −0.68, SPI 3 at −0.40, SPI 6 at −0.25, and SPI 12 at 0.42. The MI index is forecast to increase by 0.013 per year: by 2035 the MI is 0.93, in 2050 it is 1.13, in 2075 it is 1.46, and by 2100 it is 1.79.
The research results will support policymaking, environmental resource management agencies, and researchers in developing and studying solutions to adapt to and mitigate drought in the context of climate change.
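The two non-parametric trend tools named above have simple closed forms: the Mann-Kendall S statistic counts the signs of all pairwise differences, and the Theil-Sen slope is the median of all pairwise slopes. A minimal sketch of both (generic implementations, not the MAKESENS code):

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over pairs j > i.
    A large positive S indicates a monotone increasing tendency."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return int(sum(np.sign(x[j] - x[i])
                   for i in range(n - 1) for j in range(i + 1, n)))

def sens_slope(x):
    """Theil-Sen slope: median of pairwise slopes (x_j - x_i)/(j - i)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))
```

Applied to an annual rainfall or MI series, S gives the direction and significance of the trend and the Sen slope its magnitude per year.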
In this paper, the solution of the time-dependent Fokker-Planck equation of non-degenerate optical parametric amplification is used to deduce the condition for demonstrating the Einstein-Podolsky-Rosen (EPR) paradox. Analytical and numerical calculations show the influence of pump depletion on the error in the measurement of continuous variables. The optimum realization of the EPR paradox can be achieved by adjusting the squeezing parameter. This result is of practical importance when realistic experimental conditions are taken into consideration.
Femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy (FNOPAS) is a versatile technique with the advantages of high sensitivity, broad detection bandwidth, and an intrinsic spectrum-correction function. These advantages should benefit the study of coherent emission, such as the measurement of lasing dynamics. In this letter, FNOPAS was used to trace the lasing process in Rhodamine 6G (R6G) solution and in organic semiconductor nanowires. High-quality transient emission spectra and lasing dynamic traces were acquired, which demonstrates the applicability of FNOPAS to the study of lasing dynamics. Our work extends the application scope of the FNOPAS technique.
The effect of a treatment on a patient's outcome can be determined through the impact of the treatment on biological events. Observing treated patients for a certain period of time can help determine whether there is any change in a patient's biomarker. It is important to study how the biomarker changes due to treatment and whether individuals located in separate centers can be clustered together, since they might have different distributions. The study is motivated by a Bayesian non-parametric mixture model, which is more flexible than Bayesian parametric models and is capable of borrowing information across different centers, allowing them to be grouped together. To this end, this research modeled biological markers taking the surrogate markers into consideration. The study employed a nested Dirichlet process prior, which places a prior on the distributions of the several centers, with centers drawn from the same Dirichlet process component clustered together automatically. Samples from the posterior were drawn using a Markov chain Monte Carlo algorithm. The model is illustrated using a simulation study to see how it performs on simulated data; the study showed that the model is capable of clustering the data into different clusters.
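The building block of such Dirichlet process priors is the stick-breaking construction, which generates the random mixture weights; a minimal sketch of the truncated version (the generic construction, not the paper's full nested model) is:

```python
import numpy as np

def stick_breaking(alpha, n_sticks, seed=0):
    """Truncated stick-breaking construction of Dirichlet process weights:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=n_sticks)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return w
```

In the nested version, an outer stick-breaking process assigns whole centers to components, and each component carries its own inner process over the biomarker distribution.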
Longitudinal trends of observations can be estimated using the generalized multivariate analysis of variance (GMANOVA) model proposed by [10]. In the present paper, we consider estimating the trends nonparametrically using known basis functions. Then, as in nonparametric regression, an overfitting problem occurs. [13] showed that the GMANOVA model is equivalent to the varying coefficient model with non-longitudinal covariates. Hence, as in the case of the ordinary linear regression model, when the number of covariates becomes large, the estimator of the varying coefficient becomes unstable. In the present paper, we avoid the overfitting and instability problems by applying the ideas behind penalized smoothing spline regression and multivariate generalized ridge regression. In addition, we propose two criteria to optimize the hyperparameters, namely a smoothing parameter and the ridge parameters. Finally, we compare the ordinary least squares estimator and the new estimator.
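The generalized ridge idea referred to above attaches one penalty per coefficient; a minimal sketch of the estimator (a generic version, not the paper's GMANOVA-specific form) is:

```python
import numpy as np

def generalized_ridge(X, y, lam):
    """Generalized ridge estimator (X'X + diag(lam))^{-1} X'y with one
    ridge parameter per coefficient; lam = 0 recovers least squares."""
    lam = np.broadcast_to(np.asarray(lam, dtype=float), (X.shape[1],))
    return np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)
```

Choosing the individual entries of `lam` (and the smoothing parameter) is exactly what the paper's two proposed criteria optimize.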
This paper investigates the random responses of a TDOF structure with strongly nonlinear coupling and parametric vibration. With the nonlinear coupling of inertia in the equations of motion of the system removed by successive elimination, the non-Gaussian moment equation method (NGM) is applied and 69 moment equations are integrated with a central cumulative truncation technique. The stochastic central difference-cum-statistical linearization method (SCD-SL) and the digital simulation method (DSM) are also used. A comparison of the results from the different methods is given, and the SCD-SL method is the most efficient.
Modeling dynamic systems with linear parametric models usually suffers limitations that affect forecasting performance and policy implications. This paper advances a non-parametric autoregressive distributed lag model that employs a Bayesian additive regression tree (BART). The performance of the BART model is compared with that of selection models such as the Lasso, the Elastic Net, and Bayesian networks in simulation experiments with linear and non-linear data generating processes (DGPs), and on US macroeconomic time series data. The results show that the BART model is quite competitive with the linear parametric methods when the DGP is linear, and outperforms the competing methods when the DGP is non-linear. The empirical results suggest that the BART estimators are generally more efficient than the traditional linear methods when modeling and forecasting macroeconomic time series.
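The linear parametric baseline that the BART variant relaxes is the ADL model itself; a minimal least-squares fit of an ADL(1,1) specification, y_t = c + a·y_{t-1} + b0·x_t + b1·x_{t-1} + e_t, can be sketched as follows (an illustrative baseline, not the paper's estimator):

```python
import numpy as np

def fit_adl11(y, x):
    """Least-squares fit of a linear ADL(1,1) model
    y_t = c + a*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t.
    Returns the coefficient vector [c, a, b0, b1]."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    Y = y[1:]
    Z = np.column_stack([np.ones(len(Y)), y[:-1], x[1:], x[:-1]])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return coef
```

The BART version replaces this single linear map from lagged regressors to y_t with a sum-of-trees regression over the same lag structure.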
文摘Image processing and image analysis are the main aspects for obtaining information from digital image owing to the fact that this techniques give the desired details in most of the applications generally and Non-Destructive testing specifically. This paper presents a proposed method for the automatic detection of weld defects in radiographic images. Firstly, the radiographic images were enhanced using adaptive histogram equalization and are filtered using mean and wiener filters. Secondly, the welding area is selected from the radiography image. Thirdly, the Cepstral features are extracted from the Higher-Order Spectra (Bispectrum and Trispectrum). Finally, neural networks are used for feature matching. The proposed method is tested using 100 radiographic images in the presence of noise and image blurring. Results show that in spite of time consumption, the proposed method yields best results for the automatic detection of weld defects in radiography images when the features were extracted from the Trispectrum of the image.
文摘Estimation for the parameters of the generalized logistic distribution (GLD) is obtained based on record statistics from a Bayesian and non-Bayesian approach. The Bayes estimators cannot be obtained in explicit forms. So the Markov chain Monte Carlo (MCMC) algorithms are used for computing the Bayes estimates. Point estimation and confidence intervals based on maximum likelihood and the parametric bootstrap methods are proposed for estimating the unknown parameters. A numerical example has been analyzed for illustrative purposes. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
文摘This study aimed to examine the performance of the Siegel-Tukey and Savage tests on data sets with heterogeneous variances. The analysis, considering Normal, Platykurtic, and Skewed distributions and a standard deviation ratio of 1, was conducted for both small and large sample sizes. For small sample sizes, two main categories were established: equal and different sample sizes. Analyses were performed using Monte Carlo simulations with 20,000 repetitions for each scenario, and the simulations were evaluated using SAS software. For small sample sizes, the I. type error rate of the Siegel-Tukey test generally ranged from 0.045 to 0.055, while the I. type error rate of the Savage test was observed to range from 0.016 to 0.041. Similar trends were observed for Platykurtic and Skewed distributions. In scenarios with different sample sizes, the Savage test generally exhibited lower I. type error rates. For large sample sizes, two main categories were established: equal and different sample sizes. For large sample sizes, the I. type error rate of the Siegel-Tukey test ranged from 0.047 to 0.052, while the I. type error rate of the Savage test ranged from 0.043 to 0.051. In cases of equal sample sizes, both tests generally had lower error rates, with the Savage test providing more consistent results for large sample sizes. In conclusion, it was determined that the Savage test provides lower I. type error rates for small sample sizes and that both tests have similar error rates for large sample sizes. These findings suggest that the Savage test could be a more reliable option when analyzing variance differences.
文摘The law of mass action, based on maxwellian statistics, cannot explain recent epicatalysis experiments but does when generalized to non-maxwellian statistics. Challenges to the second law are traced to statistical heterogeneity that falls outside assumptions of homogeneity and indistinguishability made by Boltzmann, Gibbs, Tolman and Von Neumann in their H-Theorems. Epicatalysis operates outside these assumptions. Hence, H-Theorems do not apply to it and the second law is bypassed, not broken. There is no contradiction with correctly understood established physics. Other phenomena also based on heterogeneous statistics include non-maxwellian adsorption, the field-induced thermoelectric effect and the reciprocal Hall effect. Elementary particles have well known distributions such as Fermi-Dirac and Bose Einstein, but composite particles such as those involved in chemical reactions, have complex intractable statistics not necessarily maxwellian and best determined by quantum modeling methods. A step by step solution for finding the quantum thermodynamic properties of a quantum composite gas, that avoids the computational requirement of modeling a large number of composite particles includes 1) quantum molecular modeling of a few particles, 2) determining their available microstates, 3) producing their partition function, 4) generating their statistics, and 5) producing the epicatalytic parameter for the generalized law of mass action.
文摘Choosing appropriate statistical tests is crucial but deciding which tests to use can be challenging. Different tests suit different types of data and research questions, so it is important to choose the right one. Knowing how to select an appropriate test can lead to more accurate results. Invalid results and misleading conclusions may be drawn from a study if an incorrect statistical test is used. Therefore, to avoid these it is essential to understand the nature of the data, the research question, and the assumptions of the tests before selecting one. This is because there are a wide variety of tests available. This paper provides a step-by-step approach to selecting the right statistical test for any study, with an explanation of when it is appropriate to use it and relevant examples of each statistical test. Furthermore, this guide provides a comprehensive overview of the assumptions of each test and what to do if these assumptions are violated.
文摘Choosing appropriate statistical tests is crucial but deciding which tests to use can be challenging. Different tests suit different types of data and research questions, so it is important to choose the right one. Knowing how to select an appropriate test can lead to more accurate results. Invalid results and misleading conclusions may be drawn from a study if an incorrect statistical test is used. Therefore, to avoid these it is essential to understand the nature of the data, the research question, and the assumptions of the tests before selecting one. This is because there are a wide variety of tests available. This paper provides a step-by-step approach to selecting the right statistical test for any study, with an explanation of when it is appropriate to use it and relevant examples of each statistical test. Furthermore, this guide provides a comprehensive overview of the assumptions of each test and what to do if these assumptions are violated.
基金supported in part by Hong Kong UST (Grant No. DAG05/06.SC)Hong Kong RGC CERG(Grant No. 602206)+1 种基金supported by National Natural Science Foundation (Grant No.10801118)the PhD Programs Foundation of the Ministry of Education of China (Grant No. 200803351094)
文摘Let X 1, ..., X n be independent and identically distributed random variables and W n = W n (X 1, ..., X n ) be an estimator of parameter ?. Denote T n = (W n ? ? 0)/s n , where s n 2 is a variance estimator of W n . In this paper a general result on the limiting distributions of the non-central studentized statistic T n is given. Especially, when s n 2 is the jacknife estimate of variance, it is shown that the limit could be normal, a weighted χ 2 distribution, a stable distribution, or a mixture of normal and stable distribution. Applications to the power of the studentized U- and L- tests are also discussed.
文摘Estimation of the bivariate survival function under the competing risks caseis considered.We give an explicit formula for the estimator from a decomposition of thebivariate survival function based on competing risks,which is almost sure consistent.
文摘In this paper, the following contaminated linear model is considered:y i=(1-ε)x τ iβ+z i, 1≤i≤n,where r.v.'s { y i } are contaminated with errors { z i }. To assume that the errors have the finite moment of order 2 only. The non parametric estimation of contaminated coefficient ε and regression parameter β are established, and the strong consistency and convergence rate almost surely of the estimators are obtained. A simulated example is also given to show the visual performance of the estimations.
文摘The angular spectrum gain characters and the power magnification characters of high gain non-walk-off colinear optical parametric oscillators have been studied using the non-colinear phase match method for the first time. The experimental results of the KTiOAs04 and the KTiOP04 crystals are discussed in detail. At the high energy single resonant condition, low reflective ratio of the output mirror for the signal and long non-linear crystal are beneficial for small divergence angles. This method can also be used for other high gain non-walk-off phase match optical parametric orocesses.
文摘Short-term traffic flow is one of the core technologies to realize traffic flow guidance. In this article, in view of the characteristics that the traffic flow changes repeatedly, a short-term traffic flow forecasting method based on a three-layer K-nearest neighbor non-parametric regression algorithm is proposed. Specifically, two screening layers based on shape similarity were introduced in K-nearest neighbor non-parametric regression method, and the forecasting results were output using the weighted averaging on the reciprocal values of the shape similarity distances and the most-similar-point distance adjustment method. According to the experimental results, the proposed algorithm has improved the predictive ability of the traditional K-nearest neighbor non-parametric regression method, and greatly enhanced the accuracy and real-time performance of short-term traffic flow forecasting.
Funding: Project supported by the Natural Science Foundation of Shanxi Province, China (Grant No. 2006011003)
Abstract: The solution of the time-dependent, periodically pumped non-degenerate optical parametric amplifier (NOPA) is derived with pump depletion taken into account both above and below threshold. Based on this solution, calculation of the quantum fluctuations shows that high entanglement and a good squeezing degree of the parametric light beams are achieved near and above threshold. We adopt two kinds of pump fields: (i) a continuously modulated pump with a sinusoidal envelope; and (ii) a sequence of laser pulses with Gaussian envelopes. We analyse the time evolution of continuous-variable entanglement by analytical and numerical calculations, and show that periodically driven pumping also improves the degree of entanglement. The squeezing and Einstein-Podolsky-Rosen (EPR) entanglement obtained with the two pump driving functions are investigated from below to above the threshold region; the tendencies are nearly the same, and the Gaussian driving function is superior to the sinusoidal one when the maximum squeezing and the minimum variance of the quantum fluctuations are considered. The generalization from the frequency-degenerate OPA to the frequency non-degenerate OPA problem is also investigated.
Funding: Project supported by the Natural Science Foundation of Shanxi Province, China (Grant No. 2006011003)
Abstract: This paper applies the minimum-variance V1 criterion to monitor the evolution of the signal and idler modes of a composite non-degenerate optical parametric amplification (NOPA) system. Analytical and numerical calculations show the influence of the transition time, the vacuum fluctuations, and the thermal noise level on the EPR entanglement of the composite NOPA system. We find that the entanglement and the squeezing degrade as the minimum variance V1 increases.
Abstract: A quantitative study of trends in drought indicators in Vietnam was carried out through a case study of Ninh Thuan province. The research data are temperature and precipitation records of 11 stations inside and outside Ninh Thuan province from 1986 to 2016. The author uses non-parametric trend analysis, namely the Mann-Kendall (MK) test and the Theil-Sen (Sen's slope) estimator, together with two drought indices, the Standardized Precipitation Index (SPI) and the Moisture Index (MI). Two software packages were used in the calculations: ProUCL 5.1 from the US Environmental Protection Agency and MAKESENS 1.0 from the Finnish Meteorological Institute. The results show that meteorological drought will decrease in the future; the trend is very clear at stations such as Phan Rang, Song Pha, Quan The, and Ba Thap, while it is much weaker at Tam My and Nhi Ha. For agricultural drought, the average MI increased by 0.013 per year, with the strongest trend at Song Pha station (0.03 per year) and the weakest at Nhi Ha (0.001 per year). The projections also show that by the end of the 21st century the SPI tends to decrease, with SPI 1 at −0.68, SPI 3 at −0.40, SPI 6 at −0.25, and SPI 12 at 0.42. The MI is projected to increase by 0.013 per year, reaching 0.93 by 2035, 1.13 in 2050, 1.46 in 2075, and 1.79 by 2100. The results will be useful to policymakers, environmental resource management agencies, and researchers developing and studying solutions to adapt to and mitigate drought in the context of climate change.
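The Mann-Kendall test and Sen's slope used in the abstract above are simple to state. A minimal sketch follows (function names and the toy series are illustrative; no tie correction is applied, unlike production tools such as MAKESENS):

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend statistic S and its normal-approximation Z
    (no tie correction, for illustration only)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var)
    elif s < 0:
        z = (s + 1) / np.sqrt(var)
    else:
        z = 0.0
    return s, z

def sens_slope(x):
    """Theil-Sen estimator: the median of all pairwise slopes."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

series = [0.2, 0.5, 0.4, 0.9, 1.1, 1.0, 1.4]   # e.g. annual MI values
s, z = mann_kendall(series)
print(s, z, sens_slope(series))
```

For this toy series S = 17 and Z exceeds 1.96, so the upward trend is significant at the 5% level, with a Sen's slope of about 0.2 per year.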
Abstract: In this paper, the solution of the time-dependent Fokker-Planck equation for non-degenerate optical parametric amplification is used to deduce the condition for demonstrating the Einstein-Podolsky-Rosen (EPR) paradox. Analytical and numerical calculations show the influence of pump depletion on the error in the measurement of the continuous variables. The optimal realization of the EPR paradox can be achieved by adjusting the squeezing parameter. This result is of practical importance when realistic experimental conditions are taken into consideration.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 20925313 and 21503066), the Innovation Program of the Chinese Academy of Sciences (Grant No. KJCX2-YW-W25), the Postdoctoral Project of Hebei University, China, and the Project of the Science and Technology Bureau of Baoding City, China (Grant No. 15ZG029)
Abstract: Femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy (FNOPAS) is a versatile technique with the advantages of high sensitivity, broad detection bandwidth, and an intrinsic spectrum-correction function. These advantages should benefit the study of coherent emission, such as the measurement of lasing dynamics. In this letter, FNOPAS was used to trace the lasing process in Rhodamine 6G (R6G) solution and in organic semiconductor nanowires. High-quality transient emission spectra and lasing dynamics traces were acquired, demonstrating the applicability of FNOPAS to the study of lasing dynamics. Our work extends the application scope of the FNOPAS technique.
Abstract: The effect of a treatment on a patient's outcome can be determined through the treatment's impact on biological events. Observing treated patients over a period of time helps determine whether there is any change in a patient's biomarker. It is important to study how the biomarker changes due to treatment, and whether individuals located in separate centers, which may have different distributions, can be clustered together. The study is motivated by a Bayesian non-parametric mixture model, which is more flexible than Bayesian parametric models and is capable of borrowing information across different centers, allowing them to be grouped together. To this end, this research modeled biological markers while taking surrogate markers into consideration. The study employed a nested Dirichlet process prior, which allows different distributions for the several centers, with centers drawn from the same Dirichlet process component clustered together automatically. Sampling from the posterior was carried out with a Markov chain Monte Carlo algorithm. The model is illustrated with a simulation study to assess its performance on simulated data; the simulation study shows that the model is capable of partitioning the data into distinct clusters.
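The nested Dirichlet process in the abstract above is beyond a short sketch, but the clustering behavior it relies on already appears in a single Dirichlet process, whose induced partition is the Chinese restaurant process. The following minimal illustration (function name and parameters are hypothetical, not from the paper) shows how such a prior generates a random number of clusters:

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Draw cluster labels from a Chinese restaurant process, the
    partition distribution induced by a Dirichlet process prior."""
    counts = []                          # cluster sizes so far
    labels = []
    for _ in range(n):
        # Join an existing cluster with probability proportional to its
        # size, or open a new one with probability proportional to alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(counts):
            counts.append(1)             # new cluster created
        else:
            counts[k] += 1
        labels.append(k)
    return labels

rng = np.random.default_rng(0)
labels = crp_assignments(50, alpha=1.0, rng=rng)
print(len(set(labels)))                  # number of clusters formed
```

Larger alpha favors more clusters; in the nested version of the paper, a construction of this kind operates at the level of whole center-specific distributions rather than single observations.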
Abstract: Longitudinal trends of observations can be estimated using the generalized multivariate analysis of variance (GMANOVA) model proposed by [10]. In the present paper, we consider estimating the trends nonparametrically using known basis functions. Then, as in nonparametric regression, an overfitting problem occurs. [13] showed that the GMANOVA model is equivalent to the varying coefficient model with non-longitudinal covariates. Hence, as in the case of the ordinary linear regression model, when the number of covariates becomes large, the estimator of the varying coefficient becomes unstable. In the present paper, we avoid the overfitting and instability problems by applying the ideas behind penalized smoothing spline regression and multivariate generalized ridge regression. In addition, we propose two criteria to optimize the hyperparameters, namely, a smoothing parameter and the ridge parameters. Finally, we compare the ordinary least squares estimator with the new estimator.
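The ridge idea invoked above stabilizes an estimator by shrinking coefficients when basis columns are nearly collinear. A minimal sketch with a single ridge parameter follows (the data, names, and the use of one common parameter rather than the paper's multiple ridge parameters are illustrative assumptions):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Nearly collinear design: two almost identical basis columns.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 30)
X = np.column_stack([t, t + 1e-4 * rng.standard_normal(30)])
y = 2.0 * t + 0.01 * rng.standard_normal(30)

b_ols = ridge(X, y, 0.0)    # unpenalized: unstable under collinearity
b_pen = ridge(X, y, 0.1)    # penalized: shrunk and stable
print(b_ols, b_pen)
```

With the penalty, the two coefficients each settle near 1.0, so their sum recovers the true slope of 2 while neither coefficient blows up.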
Funding: Project supported by the National Natural Science Foundation of China
Abstract: This paper investigates the random responses of a two-degree-of-freedom (TDOF) structure with strong nonlinear coupling and parametric vibration. With the nonlinear inertial coupling in the equations of motion removed by successive elimination, the non-Gaussian moment equation method (NGM) is applied and 69 moment equations are integrated using a central cumulative truncation technique. The stochastic central difference-cum-statistical linearization method (SCD-SL) and the digital simulation method (DSM) are also used. A comparison of the results obtained by the different methods is given, and the SCD-SL method is the most efficient.
Abstract: Modeling dynamic systems with linear parametric models usually suffers from limitations that affect forecasting performance and policy implications. This paper advances a non-parametric autoregressive distributed lag model that employs Bayesian additive regression trees (BART). The performance of the BART model is compared with that of selection models such as the Lasso, the Elastic Net, and Bayesian networks in simulation experiments with linear and non-linear data generating processes (DGPs), and on US macroeconomic time series data. The results show that the BART model is quite competitive with the linear parametric methods when the DGP is linear, and outperforms the competing methods when the DGP is non-linear. The empirical results suggest that the BART estimators are generally more efficient than the traditional linear methods when modeling and forecasting macroeconomic time series.
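BART itself is a sum-of-trees model with priors on tree structure, which is too involved for a short sketch. As a loose stand-in (not BART, and not the paper's method), the toy sum-of-stumps boosting example below shows why tree ensembles capture a non-linear DGP that a linear fit misses; all names and data are illustrative:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-split regression stump on the residuals r."""
    best = (np.inf, 0, 0.0, 0.0, 0.0)   # (sse, feature, threshold, left, right)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            if mask.all() or not mask.any():
                continue
            lm, rm = r[mask].mean(), r[~mask].mean()
            sse = ((r[mask] - lm) ** 2).sum() + ((r[~mask] - rm) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, t, lm, rm)
    return best[1:]

def boost(X, y, n_trees=50, lr=0.1):
    """Gradient boosting with stumps: a toy sum-of-trees fit."""
    pred = np.full(len(y), float(y.mean()))
    trees = []
    for _ in range(n_trees):
        j, t, lm, rm = fit_stump(X, y - pred)
        pred += lr * np.where(X[:, j] <= t, lm, rm)
        trees.append((j, t, lm, rm))
    return float(y.mean()), lr, trees

def boost_predict(model, X):
    base, lr, trees = model
    pred = np.full(len(X), base)
    for j, t, lm, rm in trees:
        pred += lr * np.where(X[:, j] <= t, lm, rm)
    return pred

# Non-linear DGP: a step function that a straight line cannot capture.
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = (X[:, 0] > 0).astype(float)
model = boost(X, y)
mse_tree = float(np.mean((boost_predict(model, X) - y) ** 2))
coef = np.polyfit(X[:, 0], y, 1)                  # linear benchmark
mse_lin = float(np.mean((np.polyval(coef, X[:, 0]) - y) ** 2))
print(mse_tree, mse_lin)   # the tree ensemble fits the step far better
```

Each stump splits at the step boundary, so the ensemble's in-sample error shrinks geometrically, while the best straight line is stuck with a large residual; this mirrors, in miniature, why the paper finds tree-based models winning under non-linear DGPs.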