According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning from the fitting results of the failure data of a software project, the SRES can recommend to users “the most suitable model” as a software reliability measurement model. We believe that the SRES overcomes well the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES. (Supported by the National Natural Science Foundation of China.)
Stem diameter distribution information is useful in forest management planning. The Weibull function is flexible and has been used worldwide to characterise diameter distributions, especially in single-species planted stands. We evaluated several Weibull parameter estimation methods for stem diameter characterisation in the multi-species Oban Forest in southern Nigeria. Four study sites (Aking, Ekang, Erokut and Ekuri) were selected. Four 2 km-long transects, 600 m apart, were laid in each location. Five 50 m x 50 m plots were alternately laid along each transect, 400 m apart (20 plots/location), using a systematic sampling technique. Tree growth variables measured on all trees > 10 cm were diameter at breast height (Dbh), diameters at the base, middle and merchantable limit, total height, merchantable height, stem straightness, crown length and crown diameter; from these, model response variables such as mean diameters, basal area and stem volume were computed. The Weibull parameter estimation methods used were moment-based, percentile-based, hybrid and maximum-likelihood (ML). Data were analysed using descriptive statistics, regression models and ANOVA at α = 0.05. The percentile-based method was the best for estimating the Weibull location (a), scale (b) and shape (c) parameters, with mLogL = 116.66 ± 21.89, while the hybrid method was least suitable (mLogL = 690.14 ± 128.81). Quadratic mean diameter (Dq) was the only suitable predictor of Weibull parameters in Oban Forest.
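As a minimal sketch of two of the estimation approaches named above, the snippet below fits a two-parameter Weibull to synthetic diameter data by maximum likelihood and by a textbook percentile method. The stand parameters and sample are illustrative assumptions, not the Oban data, and the percentile estimator shown is one common variant; the paper's exact estimators are not given in the abstract.

```python
import numpy as np
from scipy import stats

# Synthetic stand: Dbh values (cm) from a two-parameter Weibull
# (scale b = 15, shape c = 2) -- hypothetical, not the Oban data.
rng = np.random.default_rng(42)
b_true, c_true = 15.0, 2.0
dbh = stats.weibull_min.rvs(c_true, scale=b_true, size=2000, random_state=rng)

# Maximum-likelihood fit with the location parameter a fixed at 0.
c_ml, a_ml, b_ml = stats.weibull_min.fit(dbh, floc=0.0)

# Percentile-based shape estimate from two sample quantiles, using
# ln x_p = ln b + (1/c) ln(-ln(1-p)) for the Weibull quantile function.
p1, p2 = 0.25, 0.75
x1, x2 = np.quantile(dbh, [p1, p2])
c_pct = (np.log(-np.log(1 - p2)) - np.log(-np.log(1 - p1))) / np.log(x2 / x1)

print(c_ml, b_ml, c_pct)
```

With a sample this large both estimators land close to the true shape c = 2; comparing them on many simulated stands (e.g. via mean log-likelihood, as the paper does) is how the methods can be ranked.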
Source term identification is very important in contaminant gas emission events, so it is necessary to study source parameter estimation methods with high computational efficiency, high estimation accuracy and reasonable confidence intervals. The Tikhonov regularization method is a potentially good tool for identifying source parameters, but it is invalid for nonlinear inverse problems such as the gas emission process. The 2-step nonlinear and linear PSO (particle swarm optimization)-Tikhonov regularization methods proposed previously have estimated emission source parameters successfully, but problems remain in computational efficiency and confidence intervals. Hence, a new 1-step nonlinear method combining Tikhonov regularization and the PSO algorithm with a nonlinear forward dispersion model is proposed. First, the method was tested on simulation and experimental cases; the tests showed that the 1-step nonlinear hybrid method is able to estimate multiple source parameters with reasonable confidence intervals. Then, the estimation performance of the different methods was compared across cases. The estimates from the 1-step nonlinear method were close to those from the 2-step nonlinear and linear PSO-Tikhonov regularization methods, and the 1-step nonlinear method even performed better than the other two methods in some cases, especially for source strength and downwind distance estimation. Compared with the 2-step nonlinear method, the 1-step method has higher computational efficiency, and its confidence intervals appear more reasonable than those of the other two methods. Finally, the single PSO algorithm was compared with the 1-step nonlinear PSO-Tikhonov hybrid regularization method. The results showed that the skill scores of the 1-step nonlinear hybrid method were close to those of the single PSO method, and even better in some cases. One further important property of the 1-step nonlinear PSO-Tikhonov regularization method is its reasonable confidence interval, which cannot be obtained with the single PSO algorithm alone. Therefore, the 1-step nonlinear hybrid regularization method proposed in this paper is a potentially good method for estimating contaminant gas emission source terms. (Supported by the National Natural Science Foundation of China (21676216), the China Postdoctoral Science Foundation (2015M582667), the Natural Science Basic Research Plan in Shaanxi Province of China (2016JQ5079), the Key Research Project of Shaanxi Province (2015ZDXM-GY-115), and the Fundamental Research Funds for the Central Universities (xjj2017124).)
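The core idea of a 1-step nonlinear PSO-Tikhonov scheme can be sketched as follows: a particle swarm minimizes the data misfit of a nonlinear forward dispersion model plus a Tikhonov penalty toward a prior guess. The toy 2-D plume model, the sensor layout, and all parameter values here are illustrative assumptions, not the paper's dispersion model or data.

```python
import numpy as np

def plume(q, x0, xs, ys, u=2.0, K=0.5):
    """Toy steady 2-D Gaussian plume: concentration at sensors (xs, ys)
    from a source of strength q at (x0, 0), wind speed u, diffusivity K."""
    dx = np.maximum(xs - x0, 1e-6)
    sig2 = 2.0 * K * dx / u
    return q / (u * np.sqrt(2 * np.pi * sig2)) * np.exp(-ys**2 / (2 * sig2))

rng = np.random.default_rng(0)
xs = np.tile(np.array([20.0, 40.0, 60.0, 80.0]), 3)   # sensor grid (m)
ys = np.repeat(np.array([-5.0, 0.0, 5.0]), 4)
theta_true = np.array([50.0, 5.0])                     # [strength q, position x0]
obs = plume(*theta_true, xs, ys) * (1 + 0.05 * rng.standard_normal(xs.size))

lam, prior = 1e-4, np.array([40.0, 0.0])               # Tikhonov weight and prior

def cost(th):
    r = plume(th[0], th[1], xs, ys) - obs
    return r @ r + lam * np.sum((th - prior) ** 2)     # misfit + regularization

# Minimal PSO over theta = (q, x0).
lo, hi = np.array([1.0, 0.0]), np.array([200.0, 15.0])
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, (n, 2)); vel = np.zeros((n, 2))
pbest = pos.copy(); pcost = np.array([cost(p) for p in pos])
g = pbest[pcost.argmin()]
for _ in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]
    g = pbest[pcost.argmin()]

print(g)  # estimated (q, x0)
```

The swarm recovers the source strength and position from noisy sensor data in a single nonlinear optimization; the paper additionally derives confidence intervals from the regularized formulation, which this sketch omits.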
Two aspects of a new method that can be used for seismic zoning are introduced in this paper. First, the approach of Kijko and Sellevoll for estimating the b value and the annual activity rate requires the earthquake catalogue. The existing earthquake catalogue contains both historical and recent instrumental data sets, and it is inadequate to use only one part. Combining the large number of historical events with recent complete records, and taking magnitude uncertainty into account, Kijko's method gives maximum likelihood estimates of the b value and the annual activity rate, which may be more realistic. Second, the method considers source zone boundary uncertainty in seismic hazard analysis: the earthquake activity rate across the boundary of a source zone changes smoothly instead of abruptly, which avoids too large a gradient in the calculated results. (Sponsored by the State Seismological Bureau of China (85070102).)
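For orientation, the simplest maximum-likelihood b-value estimator (Aki's, for a complete catalogue of continuous magnitudes above the completeness level) is shown below; Kijko's method extends this idea to combined historical and instrumental catalogues with magnitude uncertainty. The synthetic catalogue is an illustrative assumption.

```python
import numpy as np

def b_value_mle(mags, mc):
    """Aki's maximum-likelihood b value for a complete catalogue with
    magnitudes >= mc: b = log10(e) / (mean(M) - mc)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc), m.size

# Synthetic Gutenberg-Richter catalogue: above mc, the magnitude excess
# is exponential with rate beta = b * ln(10).
rng = np.random.default_rng(1)
b_true, mc = 1.0, 3.0
mags = mc + rng.exponential(1.0 / (b_true * np.log(10)), size=5000)

b_hat, n_above = b_value_mle(mags, mc)
print(b_hat, n_above)  # annual activity rate = n_above / catalogue span (yr)
```

The estimator recovers b close to the true value of 1.0 here; the count above the completeness magnitude, divided by the catalogue span, gives the annual activity rate.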
A new method is presented for calibrating array gain and phase uncertainties, which severely degrade the performance of spatial spectrum estimation. The method is based on the idea of the instrumental sensors method (ISM): two well-calibrated sensors are added to the original array. By applying the principle of estimation of signal parameters via rotational invariance techniques (ESPRIT), the directions of arrival (DOAs) and the uncertainties can be estimated simultaneously through eigendecomposition. Compared with conventional methods, the new method has lower computational complexity and higher estimation precision; moreover, it overcomes the ambiguity problem. Both theoretical analysis and computer simulations show the effectiveness of the proposed method.
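The rotational-invariance step that the method builds on can be sketched with plain LS-ESPRIT on an ideal (already calibrated) uniform linear array; the paper's contribution, estimating the gain/phase errors jointly via two instrumental sensors, is not reproduced here, and the scenario below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
M, d, N = 8, 0.5, 500                   # sensors, spacing (wavelengths), snapshots
thetas = np.deg2rad([-20.0, 35.0])      # true DOAs

# Uniform-linear-array manifold and noisy snapshots.
A = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(thetas)[None, :])
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

# Signal subspace: eigenvectors of the sample covariance for the
# 2 largest eigenvalues (np.linalg.eigh sorts ascending).
R = X @ X.conj().T / N
w, V = np.linalg.eigh(R)
Es = V[:, -2:]

# LS-ESPRIT: rotational invariance between the two shifted subarrays,
# Es[1:] = Es[:-1] @ Phi; the eigenvalues of Phi carry the DOAs.
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
eigs = np.linalg.eigvals(Phi)
doas = np.rad2deg(np.arcsin(np.angle(eigs) / (2 * np.pi * d)))
print(np.sort(doas))
```

Note that the DOAs come out of a single eigendecomposition with no spectral search, which is the computational advantage the abstract refers to.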
A commonly used statistical procedure for describing observed data sets is to use their conventional moments or cumulants. When an appropriate parametric distribution is chosen for a data set, its parameters are typically estimated by the method of moments: a system of equations is created in which the sample conventional moments are set equal to the corresponding moments of the theoretical distribution. However, moment-based parameter estimation is not always convenient, especially for small samples. An alternative approach is based on other characteristics, called L-moments. L-moments are analogous to conventional moments, but they are based on linear combinations of order statistics, i.e., L-statistics. Using L-moments is theoretically preferable to using conventional moments, since L-moments characterize a wider range of distributions. When estimated from a sample, L-moments are more robust to the presence of outliers in the data. Experience also shows that, compared to conventional moments, L-moments are less prone to estimation bias. For small samples, parameter estimates obtained using L-moments are often even more accurate than maximum likelihood estimates. In the statistical literature, the method of L-moments is known primarily from its use on small data sets, for example in meteorology. This paper deals with the use of L-moments for large data sets of income distribution (individual data) and wage distribution (data ordered into an interval frequency distribution with open extreme intervals). The paper also compares the accuracy of the method of L-moments with the accuracy of other methods of point estimation of parameters of parametric probability distributions, for large data sets of individual data and of data ordered into an interval frequency distribution.
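The first two sample L-moments can be computed from probability-weighted moments of the order statistics, as sketched below; the exponential sample is an illustrative assumption (for an exponential distribution with mean 2, the L-location is 2 and the L-scale is 1).

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via unbiased probability-weighted
    moments b0, b1 of the sorted sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
    l1 = b0              # L-location (equals the mean)
    l2 = 2 * b1 - b0     # L-scale (a robust analogue of dispersion)
    return l1, l2

rng = np.random.default_rng(3)
x = rng.exponential(2.0, size=100_000)
l1, l2 = sample_l_moments(x)
print(l1, l2)
```

Because l1 and l2 are linear in the order statistics, no sample value is raised to a power, which is the source of the robustness and low bias described above.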
A new and useful method of technology economics, a parameter estimation method, is presented in light of the stability of the gravity centre (barycentre) of an object. The method can deal with the fitting and forecasting of economic volume and can greatly decrease the errors of the fitting and forecasting results. Moreover, the strict hypothetical conditions of the least squares method are not necessary in the proposed method, which overcomes the shortcomings of least squares and expands the application of the data barycentre method. An application to forecasting steel consumption volume is presented, and the fitting and forecasting results are satisfactory. From a comparison between the data barycentre forecasting method and the least squares method, we conclude that the fitting and forecasting results of the data barycentre method are more stable than those of least squares regression forecasting, and that the computation of the data barycentre method is simpler. As a result, the data barycentre method is convenient to use in technical economics.
A new hierarchical parameter estimation method for the doubly fed induction generator (DFIG) and drive train system in a wind turbine generator (WTG) is proposed in this paper. First, the parameters of the DFIG and the drive train are estimated locally under different types of disturbances. Second, a coordinated estimation method is applied to identify the parameters of the DFIG and the drive train simultaneously, with the purpose of attaining globally optimal estimation results. The main benefit of the proposed scheme is improved estimation accuracy. Estimation results confirm the applicability of the proposed technique. (Supported by the National Natural Science Foundation of China (Major Program) (Grant Nos. 51190102 and 51207045).)
According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning from the fitting results of the failure data of a software project, the SRES can recommend to users “the most suitable model” as a software reliability measurement model. We believe that the SRES overcomes well the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the models LVLM and LVQM. (Supported by the National Natural Science Foundation of China.)
Objective: A computational model of insulin secretion and glucose metabolism for assisting the diagnosis of diabetes mellitus in clinical research is introduced. A method is proposed for estimating the parameters of a system of ordinary differential equations (ODEs) that represents the time course of plasma glucose and insulin concentrations during the glucose tolerance test (GTT) in physiological studies. The aim of this study was to explore how to interpret laboratory glucose and insulin data and to enhance the Ackerman mathematical model. Methods: Parameter estimation for the system of ODEs was performed by minimizing the sum of squared residuals (SSR), which quantifies the difference between the theoretical model predictions and the GTT's experimental observations. Our proposed perturbation search and multiple-shooting methods were applied during the estimation process. Results: Based on Ackerman's published data, we estimated the key parameters by applying R-based iterative computer programs. The theoretically simulated curves closely matched the experimental data points, and the estimated parameters, with the computed frequency and period values, proved to be a good indicator of diabetes. Conclusion: This paper introduces a computational algorithm for biomedical problems, particularly in the fields of endocrinology and metabolism, involving the two coupled differential equations with four parameters describing the glucose-insulin regulatory system that Ackerman proposed earlier. The enhanced approach may provide clinicians in endocrinology and metabolism insight into the transition of human metabolic mechanisms from normal to impaired glucose tolerance. (Supported by a grant from the NIH (No. U42 RR16607).)
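The SSR-minimization step can be sketched with a linearized glucose-insulin model of the Ackerman type, where g and h are deviations of glucose and insulin from fasting levels. The parameter values and "observations" below are illustrative assumptions, not the paper's clinical data, and a single-shooting least-squares fit stands in for the paper's perturbation-search and multiple-shooting procedures.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Linearized Ackerman-type dynamics with four parameters.
def rhs(t, y, p1, p2, p3, p4):
    g, h = y
    return [-p1 * g - p2 * h, p3 * g - p4 * h]

t_obs = np.linspace(0.0, 4.0, 9)        # hours after the glucose load
p_true = (1.0, 0.8, 0.5, 0.6)           # hypothetical parameter values
y0 = [2.0, 0.0]                          # initial glucose deviation
ref = solve_ivp(rhs, (0, 4), y0, t_eval=t_obs, args=p_true, rtol=1e-8)
rng = np.random.default_rng(4)
g_obs = ref.y[0] + 0.01 * rng.standard_normal(t_obs.size)

def residuals(p):
    s = solve_ivp(rhs, (0, 4), y0, t_eval=t_obs, args=tuple(p), rtol=1e-8)
    return s.y[0] - g_obs               # SSR = sum of squares of this vector

fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 0.5], bounds=(0.0, 5.0))

# With glucose observed alone, p2 and p3 enter only through their
# product, so individual values are weakly identifiable; the natural
# frequency (the "frequency and period" indicator) is well determined.
omega0 = np.sqrt(fit.x[0] * fit.x[3] + fit.x[1] * fit.x[2])
print(omega0, 2 * np.pi / omega0)       # frequency and period
```

This illustrates why the abstract reports frequency and period rather than raw parameters: those combinations are what the GTT curve pins down.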
This paper deals with the use of the Pareto distribution in models of wage distribution. The Pareto distribution generally cannot be used as a model of the whole wage distribution, but only as a model for the distribution of higher wages, or of the highest wages; usually wages above the median. The parameter b is called the Pareto coefficient, and it is often used as a characteristic of the differentiation of the top fifty percent of wages. The Pareto distribution is the more applicable as a model of a specific wage distribution, the more closely the differentiation of the top fifty percent of wages resembles the differentiation expected under the Pareto distribution. The Pareto distribution assumes a differentiation of wages in which the following ratios are the same: the ratio of the upper quartile to the median; the ratio of the eighth decile to the sixth decile; and the ratio of the ninth decile to the eighth decile. This finding may serve as an empirical criterion for assessing whether the Pareto distribution is a suitable model of a particular wage distribution. If only small differences are found between these quantile ratios in a specific wage distribution, the Pareto distribution is a good model; the approximation will be less suitable, or even unsuitable, when the differences between the ratios are more pronounced. If the Pareto distribution is chosen as a model of a specific wage distribution, one must reckon with the fact that the model is always only an approximation: it describes the actual wage distribution only approximately, and the relationships in the model only partially reflect those in the specific wage distribution.
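The equal-ratio property above follows directly from the Pareto quantile function: each of the three ratios halves the survival probability, so each equals 2^(1/b). The check below uses a hypothetical coefficient and scale.

```python
import numpy as np

b = 2.5            # hypothetical Pareto coefficient
xm = 30_000.0      # hypothetical scale (lowest wage covered by the model)

def pareto_quantile(p, b, xm):
    """Quantile of the Pareto distribution F(x) = 1 - (xm / x)**b."""
    return xm * (1.0 - p) ** (-1.0 / b)

q = {p: pareto_quantile(p, b, xm) for p in (0.5, 0.6, 0.75, 0.8, 0.9)}
r1 = q[0.75] / q[0.5]   # upper quartile / median
r2 = q[0.8] / q[0.6]    # 8th decile / 6th decile
r3 = q[0.9] / q[0.8]    # 9th decile / 8th decile
print(r1, r2, r3)       # all equal 2 ** (1 / b)
```

Computing the same three ratios from an empirical wage distribution and comparing them is exactly the suitability criterion the paper describes.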
In Bayesian multi-target filtering, knowledge of the measurement noise variance is very important: significant mismatches in the noise parameters will result in biased estimates. In this paper, a new particle filter is proposed for a probability hypothesis density (PHD) filter handling unknown measurement noise variances. The approach is based on marginalizing the unknown parameters out of the posterior distribution using variational Bayesian (VB) methods. Moreover, the sequential Monte Carlo method is used to approximate the posterior intensity under non-linear and non-Gaussian conditions. Unlike other particle filters for this challenging class of PHD filters, the proposed method can adaptively learn the unknown and time-varying noise variances while filtering. Simulation results show that the proposed method improves estimation accuracy in terms of both the number of targets and their states. (Supported by the National High-tech Research and Development Program of China (No. 2011AA7014061).)
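The noise-variance learning idea can be illustrated in its simplest scalar form: with Gaussian innovations and an inverse-gamma model for the unknown variance, the update is a conjugate recursion, and a forgetting factor lets it track time-varying noise. This is only the scalar special case of the VB machinery; the paper embeds it in a particle PHD filter, which is not shown, and all values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
true_var = 0.49
# Stand-in for a filter's innovation (residual) sequence.
residuals = rng.normal(0.0, np.sqrt(true_var), size=2000)

alpha, beta = 1.0, 1.0   # vague inverse-gamma prior IG(alpha, beta)
rho = 1.0                # forgetting factor; set < 1 to track drifting noise
for r in residuals:
    alpha = rho * alpha + 0.5
    beta = rho * beta + 0.5 * r * r

var_est = beta / (alpha - 1.0)   # posterior mean of the noise variance
print(var_est)
```

The recursion converges to the true variance without it ever being specified, which is the behaviour the abstract claims for the full filter in the multi-target setting.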
Estimating cross-range velocity is a challenging task for space-borne synthetic aperture radar (SAR), and is important for ground moving target indication (GMTI). Because the velocity of a target is very small compared with that of the satellite, it is difficult to estimate correctly using a conventional monostatic platform algorithm. To overcome this problem, a novel method employing multistatic SAR is presented in this letter. The proposed hybrid method, based on an extended space-time model (ESTIM) of the azimuth signal, has two steps: first, a set of finite impulse response (FIR) filter banks based on the fractional Fourier transform (FrFT) is used to separate multiple targets within a range gate; second, a cross-correlation spectrum weighted subspace fitting (CSWSF) algorithm is applied to each separated signal to estimate its parameters. As verified through computer simulations with the Cartwheel, Pendulum and Helix constellations, the proposed time-frequency-subspace method effectively improves the estimation precision of the cross-range velocities of multiple targets. (Supported by the National Natural Science Foundation of China (No. 61271343), the Research Fund for the Doctoral Program of Higher Education of China (No. 20122302110012), and the 2014 Innovation of Science and Technology Program of China Aerospace Science and Technology Corporation.)