Abstract: The Digital Earth concept has attracted much attention recently, and this approach uses a variety of Earth observation data from the global to the local scale. Imaging techniques have made considerable technical progress, and methods for the automatic extraction of geo-related information are of importance in Digital Earth science. One of these methods, the artificial neural network (ANN) technique, has been used effectively in the classification of remotely sensed images. Image classification with ANNs has generally produced mapping accuracies higher than or equal to those of parametric methods. Comparative studies have, in fact, shown that there is no discernible difference in classification accuracy between neural and conventional statistical approaches; only well-designed and well-trained neural networks can outperform the standard statistical approaches. There are, as yet, no widely recognised standard methods for implementing an optimum network. From this point of view, it may be beneficial to quantify the reliability of an ANN in classification problems, and measuring the reliability of a neural network may offer a way to determine suitable network structures. To date, the problem of confidence estimation for ANNs has not been studied in remote sensing. This paper investigates a statistical method for quantifying the reliability of a neural network used in image classification. The method is based on a binomial experiment concept to establish confidence intervals. This novel method can also be used to select an appropriate network structure for the classification of multispectral imagery. Although the main focus of the research is confidence estimation for ANNs, the approach may also be applicable and relevant to Digital Earth technologies.
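The abstract does not give the interval construction itself; as a minimal sketch of the underlying binomial-experiment idea, treating each test pixel as an independent Bernoulli trial, a Clopper-Pearson (exact binomial) interval on a classifier's accuracy might look like this (the function name and the 95% level are illustrative assumptions, not the paper's):

```python
from scipy.stats import beta

def binomial_confidence_interval(correct, total, level=0.95):
    """Clopper-Pearson (exact binomial) interval for a classification
    accuracy estimated from `correct` successes out of `total` trials."""
    alpha = 1.0 - level
    lower = 0.0 if correct == 0 else beta.ppf(alpha / 2, correct, total - correct + 1)
    upper = 1.0 if correct == total else beta.ppf(1 - alpha / 2, correct + 1, total - correct)
    return lower, upper

# Example: a network labels 914 of 1000 test pixels correctly.
lo, hi = binomial_confidence_interval(914, 1000)
print(f"accuracy = 0.914, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

At a fixed test-set size, a narrower interval would indicate a more reliable network, which is one way such intervals could support the structure selection the abstract mentions.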
Funding: This project is supported by the National Natural Science Foundation of China (Nos. 50335020 and 50205009) and the Laboratory of Intelligence Manufacturing Technology of the Ministry of Education of China (No. J100301).
Abstract: Taking into account the whole system structure and the uncertainty in component reliability estimation, a system reliability estimation method based on probability and statistical theory is presented for distributed monitoring systems. The variance and confidence intervals of the system reliability estimate are obtained by expressing system reliability as a linear sum of products of higher-order moments of the component reliability estimates, when the number of component or system survivals follows a binomial distribution. The characteristic function of the binomial distribution is used to determine the moments of the component reliability estimates, and a symbolic matrix that facilitates the search for explicit system reliability estimates is proposed. Furthermore, an application case is used to illustrate the procedure; with the help of this example, issues such as the applicability of the estimation model and measures to improve the system reliability of monitoring systems are discussed.
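The paper handles general system structures via a symbolic matrix; as a minimal sketch restricted to a series system with independent components (an assumption, simpler than the paper's general case), the binomial moment identities E[p̂] = p and E[p̂²] = p² + p(1−p)/n give the variance of the product estimator directly:

```python
import math

def series_reliability_estimate(survivals, trials, level=0.95):
    """Point estimate, variance, and a normal-approximation confidence
    interval for the reliability of a series system, where component i
    survived survivals[i] of trials[i] independent tests (binomial data).
    Observed proportions are plugged in for the unknown p's."""
    p_hat = [k / n for k, n in zip(survivals, trials)]
    m2 = [p * p + p * (1 - p) / n for p, n in zip(p_hat, trials)]  # E[p^2]
    r_hat = math.prod(p_hat)              # product of first moments
    var = math.prod(m2) - r_hat ** 2      # Var = prod E[p^2] - (prod E[p])^2
    z = 1.959963984540054                 # 97.5% standard normal quantile
    half = z * math.sqrt(max(var, 0.0))
    return r_hat, var, (max(0.0, r_hat - half), min(1.0, r_hat + half))

# Example: three monitoring components tested 50, 40 and 60 times.
print(series_reliability_estimate([48, 39, 57], [50, 40, 60]))
```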
Funding: Supported by the National Natural Science Foundation of China (Nos. 11501433 and 71473187) and the Fundamental Research Funds for the Central Universities (Nos. JB150717 and 7215591806).
Abstract: Parameter estimation is considered for the Gompertz distribution under the frequentist and Bayes approaches when records are available. Maximum likelihood estimators and exact and approximate confidence intervals are developed for the model parameters, and Bayes estimators of reliability performances are obtained under different loss functions based on a mixture of continuous and discrete priors. To investigate the performance of the proposed estimators, a record simulation algorithm is provided and a numerical study is presented using Monte Carlo simulation.
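The abstract does not reproduce its simulation algorithm; as a minimal sketch under one common Gompertz parameterisation (hazard λe^{αx}, survival S(x) = exp(−(λ/α)(e^{αx}−1)) — an assumption, since the paper's parameterisation is not given here), upper records can be generated via R_k = F^{-1}(1 − e^{−Γ_k}) with Γ_k a sum of k standard exponentials, and the record likelihood maximised numerically:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_upper_records(m, alpha, lam):
    """First m upper record values from a Gompertz(alpha, lam) distribution,
    using R_k = F^{-1}(1 - exp(-G_k)) with G_k a sum of k Exp(1) variables."""
    gamma = np.cumsum(rng.exponential(size=m))
    return np.log1p((alpha / lam) * gamma) / alpha

def neg_log_likelihood(theta, records):
    """Negative log-likelihood of upper records r_1 < ... < r_m:
    l = m*log(lam) + alpha*sum(r_i) - (lam/alpha)*(exp(alpha*r_m) - 1)."""
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    m = len(records)
    return -(m * np.log(lam) + alpha * records.sum()
             - (lam / alpha) * np.expm1(alpha * records[-1]))

records = simulate_upper_records(m=10, alpha=0.5, lam=0.2)
fit = minimize(neg_log_likelihood, x0=[1.0, 1.0], args=(records,),
               method="Nelder-Mead")
print("MLE (alpha, lam):", fit.x)
```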
Funding: Supported by the Jiangsu Innovation Program for Graduate Education (CXZZ11_0193) and NUAA Research Funding (NJ2010009).
Abstract: An improved method using kernel density estimation (KDE) and confidence levels is presented for model validation with small samples. Decision making is a challenging problem because of input uncertainty, and only small samples can be used owing to the high cost of experimental measurements; model validation, however, gives decision makers more confidence while improving prediction accuracy. The confidence level method is introduced, and the optimum sample variance is determined using a new method in kernel density estimation to increase the credibility of model validation. As a numerical example, the static frame model validation challenge problem posed by Sandia National Laboratories is chosen. The optimum bandwidth is selected in kernel density estimation to build the probability model from the calibration data. The model is assessed using the validation and accreditation experimental data, respectively, based on the probability model. Finally, the target structure prediction is performed using the validated model, and the results are consistent with those obtained by other researchers. The results demonstrate that the method using the improved confidence level and kernel density estimation is an effective approach to the model validation problem with small samples.
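The abstract does not spell out how the probability model yields a confidence level; as a minimal sketch under one plausible reading (fit a KDE to the calibration responses, then report the probability mass inside an acceptance band around the measurement), using scipy's Gaussian KDE with a Silverman-rule bandwidth as a stand-in for the paper's optimised bandwidth:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_confidence_level(calibration_outputs, lower, upper, bw="silverman"):
    """Fit a Gaussian KDE to model outputs from the calibration data and
    return the probability mass inside the acceptance band [lower, upper],
    interpreted here as a confidence level for the model prediction."""
    kde = gaussian_kde(calibration_outputs, bw_method=bw)
    return kde.integrate_box_1d(lower, upper)

# Example with synthetic calibration responses (small sample, n = 12).
rng = np.random.default_rng(1)
outputs = rng.normal(loc=3.0, scale=0.2, size=12)
print(f"confidence level: {kde_confidence_level(outputs, 2.8, 3.2):.3f}")
```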
Abstract: An important component of a spoken term detection (STD) system is the estimation of confidence measures for hypothesised detections. A potential problem of the widely used lattice-based confidence estimation, however, is that the confidence scores are treated uniformly for all search terms, regardless of how much the terms may differ in phonetic or linguistic properties. This problem is particularly evident for out-of-vocabulary (OOV) terms, which tend to exhibit high intra-term diversity. To address the impact of term diversity on confidence measures, we propose a term-dependent normalisation technique that compensates for term diversity in confidence estimation. We first derive an evaluation-metric-oriented normalisation that optimises the evaluation metric by compensating for the diverse occurrence rates among terms, and then propose a linear bias compensation and a discriminative compensation to deal with the bias problem that is inherent in lattice-based confidence measurement and from which the Term-Specific Threshold (TST) approach suffers. We tested the proposed technique on speech data from the multi-party meeting domain with two state-of-the-art STD systems, based on phonemes and words respectively. The experimental results demonstrate that the confidence normalisation approach leads to a significant performance improvement in STD, particularly for OOV terms with phoneme-based systems.
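The abstract names but does not define the linear bias compensation; as a rough sketch of one plausible form (the feature choices and fitting procedure here are illustrative assumptions, not the paper's method), one could fit a linear model of per-term score bias on development data and subtract the predicted bias from each raw lattice confidence at search time:

```python
import numpy as np

def fit_bias_model(term_features, observed_bias):
    """Least-squares fit of a linear model bias ~ w.x + b on development
    data, where each row of term_features describes one term (e.g. phone
    count, log estimated occurrence rate) and observed_bias is the gap
    between that term's mean lattice confidence and its correctness rate."""
    X = np.hstack([term_features, np.ones((len(term_features), 1))])  # intercept
    w, *_ = np.linalg.lstsq(X, observed_bias, rcond=None)
    return w

def compensate(score, features, w):
    """Term-dependent linear bias compensation of a raw confidence score."""
    x = np.append(features, 1.0)
    return score - float(x @ w)

# Toy example: two term features (phone count, log occurrence rate).
X_dev = np.array([[4, -2.3], [7, -4.1], [5, -3.0], [9, -5.2]], dtype=float)
bias_dev = np.array([0.08, 0.21, 0.12, 0.30])  # rarer terms over-scored
w = fit_bias_model(X_dev, bias_dev)
print(compensate(0.82, np.array([8.0, -4.8]), w))  # compensated confidence
```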
Funding: Supported by the National Natural Science Funds for Distinguished Young Scholars (No. 70825004), the National Natural Science Foundation of China (Nos. 10731010, 10628104 and 10721101), the Leading Academic Discipline Program, and the 211 Project for Shanghai University of Finance and Economics (10th Five-Year Plan and 3rd phase).
Abstract: Line transect sampling is a very useful method in surveys of wildlife populations. Confidence interval estimation for the density D of a biological population is proposed based on a sequential design. The survey area is occupied by a population whose size is unknown. A stopping rule is proposed based on a kernel estimator of the density function of the perpendicular distance data. With this stopping rule, we construct several confidence intervals for D by different procedures, and some bias reduction techniques are used to modify the confidence intervals. These intervals attain the desired coverage probability as the bandwidth in the stopping rule approaches zero. A simulation study is also given to illustrate the performance of the proposed sequential kernel procedure.
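For context on the quantity being estimated (this is standard line-transect methodology, not the paper's sequential procedure): with n detections at perpendicular distances x_1, ..., x_n along transects of total length L, the density estimate is D̂ = n f̂(0) / (2L), where f̂(0) is the perpendicular-distance density at zero. A kernel version with reflection at the boundary, plus a bootstrap interval in place of the paper's sequential intervals, might look like:

```python
import numpy as np

rng = np.random.default_rng(2)

def f0_kernel(x, h):
    """Kernel estimate of the perpendicular-distance density at 0, using a
    Gaussian kernel reflected about the transect line (boundary at 0)."""
    n = len(x)
    k = np.exp(-0.5 * (x / h) ** 2) / np.sqrt(2 * np.pi)
    return 2.0 * k.sum() / (n * h)  # reflection doubles the boundary mass

def density_estimate(x, total_length, h):
    """Line-transect density estimate D = n * f(0) / (2L)."""
    return len(x) * f0_kernel(x, h) / (2.0 * total_length)

def bootstrap_ci(x, total_length, h, level=0.95, reps=2000):
    """Percentile bootstrap interval for D (resampling the distances)."""
    boot = [density_estimate(rng.choice(x, size=len(x)), total_length, h)
            for _ in range(reps)]
    lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Example: 60 sightings along 20 km of transect, half-normal detection.
dist = np.abs(rng.normal(scale=0.05, size=60))   # perpendicular distances, km
h = 1.06 * dist.std() * len(dist) ** (-1 / 5)    # Silverman-type bandwidth
print(density_estimate(dist, 20.0, h), bootstrap_ci(dist, 20.0, h))
```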