Funding: Supported by the National Natural Science Foundation of China (No. 60504033)
Abstract: An outlier in one variable will smear the estimates of other measurements in data reconciliation (DR). In this article, a novel robust method for nonlinear dynamic data reconciliation is proposed to reduce the influence of outliers on the DR result. The method introduces a penalty function matrix into the conventional least-squares objective function, assigning small weights to outliers and large weights to normal measurements. To avoid losing data information, an element-wise Mahalanobis distance is proposed, as an improvement on the vector-wise distance, to construct the penalty function matrix. The correlation of measurement errors is also considered. By constructing the penalty weight matrix, the method brings robust statistical theory into the conventional least-squares estimator, achieving good robustness while keeping the computation simple. Simulation of a continuous stirred tank reactor verifies the effectiveness of the proposed algorithm.
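As a rough illustration of the weighting idea described above (not the paper's exact formulation), the sketch below standardizes each residual element by its own variance taken from the measurement-error covariance and applies a Huber-type rule, so suspected outliers receive small weights in the least-squares objective; the function names and the cutoff value are assumptions.

```python
import numpy as np

def penalty_weights(residual, cov, cutoff=1.345):
    """Element-wise robust weights for a weighted least-squares objective.

    A minimal sketch, not the paper's exact method: each residual element is
    standardized by its own variance from the full error covariance (so
    correlated measurement errors enter through `cov`); a Huber-type rule then
    down-weights elements whose standardized distance exceeds `cutoff`.
    """
    std = np.sqrt(np.diag(cov))                   # element-wise scale from the covariance
    d = np.abs(residual) / std                    # element-wise (Mahalanobis-like) distance
    w = np.where(d <= cutoff, 1.0, cutoff / d)    # ~1 for normal points, <1 for outliers
    return np.diag(w)                             # penalty weight matrix W

def robust_wls_objective(x_hat, y, cov, cutoff=1.345):
    """Conventional least squares J = e^T cov^{-1} e, re-weighted by W."""
    e = y - x_hat
    W = penalty_weights(e, cov, cutoff)
    return float(e @ W @ np.linalg.inv(cov) @ W @ e)

# toy example: the outlier in y[2] receives a small weight
y = np.array([1.02, 0.98, 5.00, 1.01])
x_hat = np.ones(4)
cov = 0.05 * np.eye(4)
print(robust_wls_objective(x_hat, y, cov))
```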
Funding: Supported by the National Basic Research Program of China (973 Program) (No. 2007CB311104)
Abstract: The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in low Signal-to-Noise Ratio (SNR) environments. In this paper, a feature-weighting likelihood method is proposed for noise-robust VAD. The method increases the contribution of dynamic features to the likelihood score, which consequently improves the noise robustness of VAD. A divergence-based dimension-reduction method is also proposed to save computation; it removes the feature dimensions with smaller divergence values at the cost of a slight performance degradation. Experimental results on the Aurora II database show that the proposed method remarkably improves detection performance in noisy environments when a model trained on clean data is used to detect speech endpoints. Applying the weighted likelihood to the dimension-reduced features yields performance comparable to, and sometimes better than, that of the original full-dimensional features.
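A minimal sketch of the two ideas in this abstract, feature weighting and divergence-based dimension selection, assuming diagonal-covariance Gaussian speech/noise models; the weight values, function names, and feature layout are illustrative, not the paper's.

```python
import numpy as np

def weighted_gaussian_loglik(x, mean, var, weights):
    """Per-dimension weighted log-likelihood of a diagonal Gaussian.

    Sketch of the feature-weighting idea: each dimension's log-likelihood term
    is scaled by `weights`, so dynamic (delta) features can be given a larger
    say in the speech/non-speech score.
    """
    terms = -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
    return float(np.sum(weights * terms))

def select_dims_by_divergence(mean_s, var_s, mean_n, var_n, keep):
    """Keep the `keep` dimensions with the largest symmetric Gaussian divergence
    between the speech and noise models; the rest are dropped to save computation."""
    div = 0.5 * ((var_s / var_n + var_n / var_s - 2)
                 + (mean_s - mean_n) ** 2 * (1 / var_s + 1 / var_n))
    return np.argsort(div)[::-1][:keep]

# toy frame: 13 static coefficients weighted 1.0, 13 delta features weighted 2.0 (assumed values)
x = np.random.randn(26)
mean_s, var_s = np.zeros(26), np.ones(26)
mean_n, var_n = 0.5 * np.ones(26), 2.0 * np.ones(26)
weights = np.concatenate([np.full(13, 1.0), np.full(13, 2.0)])

dims = select_dims_by_divergence(mean_s, var_s, mean_n, var_n, keep=16)
print(weighted_gaussian_loglik(x[dims], mean_s[dims], var_s[dims], weights[dims]))
```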
基金supportedin part by the National Natural Science Foundation of China under Grant No. 61001106the National Key Basic Research Program of China(973 Program) under Grant No. 2009CB320400
Abstract: In this paper, a Maximum Likelihood (ML) approach, implemented by the Expectation-Maximization (EM) algorithm, is proposed for the blind separation of convolutively mixed discrete sources. To carry out the expectation step of the EM algorithm with a lower computational load, an algorithm named the Iterative Maximum Likelihood (IML) algorithm is proposed to calculate the likelihood and recover the source signals. An important feature of the ML approach is its robust performance in noisy environments, obtained by treating the covariance matrix of the additive Gaussian noise as a parameter. Another striking feature is that it can separate more sources than sensors by exploiting the finite-alphabet property of the sources. Simulation results show that the proposed ML approach works well in both determined and underdetermined mixtures. Furthermore, its performance is close to that obtained with perfect knowledge of the channel filters.
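The sketch below illustrates the finite-alphabet likelihood computation that underlies the E-step, simplified to an instantaneous mixture y = Hs + n rather than the paper's convolutive case; enumerating candidate symbol vectors over the alphabet is what makes an underdetermined setup (more sources than sensors) tractable. All names and the toy channel are assumptions.

```python
import numpy as np
from itertools import product

def symbol_posteriors(y, H, alphabet, noise_cov):
    """Posterior of each finite-alphabet source vector given observation y.

    A minimal sketch of the E-step idea for an instantaneous mixture
    y = H s + n: because the sources take values in a finite alphabet, the
    likelihood is a sum over all candidate symbol vectors.
    """
    n_src = H.shape[1]
    cov_inv = np.linalg.inv(noise_cov)
    candidates = np.array(list(product(alphabet, repeat=n_src)))   # all symbol vectors
    logp = np.array([-0.5 * (y - H @ s) @ cov_inv @ (y - H @ s) for s in candidates])
    p = np.exp(logp - logp.max())
    return candidates, p / p.sum()

# toy underdetermined example: 3 BPSK sources, 2 sensors (assumed values)
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 3))
s_true = np.array([1, -1, 1])
y = H @ s_true + 0.1 * rng.standard_normal(2)
cands, post = symbol_posteriors(y, H, (-1, 1), 0.01 * np.eye(2))
print(cands[np.argmax(post)])   # most likely source vector
```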
Abstract: A large sample size is required for Monte Carlo localization (MCL) in a multi-robot dynamic environment because of the "kidnapped robot" phenomenon, which places most of the samples in regions where the desired posterior density is small. To address this problem, the crossover and mutation operators of evolutionary computation are introduced into MCL to move samples toward regions where the desired posterior density is large, so that the sample set represents the density better. The proposed method is termed genetic Monte Carlo localization (GMCL). Application to a robot soccer system shows that GMCL considerably reduces the required number of samples and is more precise and robust in dynamic environments.
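A minimal sketch of how crossover and mutation might be applied to an MCL particle set, under assumed operator choices (fitness-proportional selection, arithmetic crossover, Gaussian mutation) rather than the paper's exact operators.

```python
import numpy as np

def genetic_resample(particles, weights, p_cross=0.4, p_mut=0.1, sigma_mut=0.2, rng=None):
    """One genetic step on an MCL particle set (a minimal sketch).

    Crossover blends pairs of particles drawn in proportion to their weights,
    pulling samples toward regions of high posterior density; mutation adds
    Gaussian noise so the set can recover from kidnapping.
    Particles are rows [x, y, theta].
    """
    rng = rng or np.random.default_rng()
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)             # fitness-proportional selection
    new = particles[idx].copy()
    for i in range(0, n - 1, 2):
        if rng.random() < p_cross:                     # arithmetic crossover of a pair
            a = rng.random()
            new[i], new[i + 1] = (a * new[i] + (1 - a) * new[i + 1],
                                  a * new[i + 1] + (1 - a) * new[i])
    mutate = rng.random(n) < p_mut                     # mutation: local Gaussian jitter
    new[mutate] += rng.normal(scale=sigma_mut, size=(mutate.sum(), 3))
    return new

# toy usage: 100 particles, weights concentrated near an assumed true pose (1, 2)
rng = np.random.default_rng(1)
particles = rng.uniform(-5, 5, size=(100, 3))
w = np.exp(-np.linalg.norm(particles[:, :2] - np.array([1.0, 2.0]), axis=1))
print(genetic_resample(particles, w / w.sum(), rng=rng).mean(axis=0))
```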
Abstract: We put forward an alternative quantum algorithm for finding Hamiltonian cycles in any N-vertex graph based on adiabatic quantum computing. With a von Neumann measurement on the final state, one may determine whether the graph contains a Hamiltonian cycle and pick out a cycle if there is one. Although the proposed algorithm provides only a quadratic speedup, it offers an alternative based on adiabatic quantum computation, which is of interest because of its inherent robustness.
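For context, the generic adiabatic construction this abstract relies on interpolates between an easily prepared initial Hamiltonian H_B and a problem Hamiltonian H_P whose ground state encodes a Hamiltonian cycle; the required run time is governed by the minimum spectral gap g_min between the ground and first excited states. This is the standard adiabatic condition, not the paper's specific Hamiltonians:

```latex
H(s) = (1-s)\,H_B + s\,H_P, \qquad s = t/T \in [0,1], \qquad
T \;\gtrsim\; \frac{\max_{s}\,\bigl|\langle E_1(s)|\,dH/ds\,|E_0(s)\rangle\bigr|}{g_{\min}^{2}}
```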
基金supported in part by grants from NASA (NCC5-573)LEQSF (NASA /LEQSF(2001-04)-01)+1 种基金the NNSFC Young Investigator Award for Overseas Collaborative Research (60328304)a NNSFC grant (10377004)
Abstract: This paper addresses the issues of conservativeness and computational complexity in probabilistic robustness analysis. The authors solve both issues by defining a new sampling strategy and a new robustness measure. The new measure is shown to be much less conservative than the existing one. The new sampling strategy enables the definition of efficient hierarchical sample-reuse algorithms that significantly reduce the computational complexity and make it independent of the dimension of the uncertainty space. Moreover, the authors show that there exists a one-to-one correspondence between the new and the existing robustness measures and provide a computationally simple algorithm to derive one from the other.
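A minimal sketch of the basic Monte Carlo estimate that probabilistic robustness analysis builds on: sample the uncertainty set and count how often a robustness requirement holds. The paper's new sampling strategy and hierarchical sample reuse, which remove the dependence on the uncertainty dimension, are not reproduced here; all names and the toy stability test are assumptions.

```python
import numpy as np

def probabilistic_robustness(is_robust, sample_uncertainty, radius, n_samples=2000, rng=None):
    """Monte Carlo estimate of a probabilistic robustness measure.

    Sketch only: draw uncertainty samples inside a set of the given radius and
    estimate the proportion for which the robustness requirement holds.
    """
    rng = rng or np.random.default_rng()
    hits = sum(is_robust(sample_uncertainty(radius, rng)) for _ in range(n_samples))
    return hits / n_samples

# toy example: scalar uncertainty q, requirement that the closed-loop pole 0.4 + 0.5*q stays inside the unit circle
def sample_uncertainty(radius, rng):
    return rng.uniform(-radius, radius)

def is_robust(q):
    return abs(0.4 + 0.5 * q) < 1.0      # assumed toy stability test

print(probabilistic_robustness(is_robust, sample_uncertainty, radius=2.0))
```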