Monitoring sensors in complex engineering environments often record abnormal data, leading to significant positioning errors. To reduce the influence of abnormal arrival times, we introduce an innovative, outlier-robust localization method that integrates kernel density estimation (KDE) with damping linear correction to enhance the precision of microseismic/acoustic emission (MS/AE) source positioning. Our approach systematically addresses abnormal arrival times through a three-step process: initial location by 4-arrival combinations, elimination of outliers based on three-dimensional KDE, and refinement using a linear correction with an adaptive damping factor. We validate our method through lead-breaking experiments, demonstrating over a 23% improvement in positioning accuracy with a maximum error of 9.12 mm (relative error of 15.80%), outperforming four existing methods. Simulations under various system errors, outlier scales, and ratios substantiate our method's superior performance. Field blasting experiments also confirm the practical applicability, with an average positioning error of 11.71 m (relative error of 7.59%), compared to 23.56, 66.09, 16.95, and 28.52 m for the other methods. This research is significant as it enhances the robustness of MS/AE source localization when confronted with data anomalies. It also provides a practical solution for real-world engineering and safety monitoring applications.
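To make the outlier-elimination step concrete, here is a minimal sketch, assuming candidate source locations have already been produced by the 4-arrival combinations; the `keep_fraction` threshold, scipy's default bandwidth, and the use of the inlier centroid in place of the damped linear correction are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def filter_candidates_by_kde(candidates, keep_fraction=0.7):
    """Drop candidate source locations that fall in low-density
    regions of a 3-D KDE fitted over all candidates.

    candidates : (n, 3) array of x, y, z locations from 4-arrival
                 combinations; keep_fraction is an assumed tuning knob.
    """
    kde = gaussian_kde(candidates.T)          # fit a 3-D KDE
    density = kde(candidates.T)               # density at each candidate
    cutoff = np.quantile(density, 1.0 - keep_fraction)
    kept = candidates[density >= cutoff]      # retain the dense cluster
    # centroid stands in for the subsequent damped linear correction
    return kept, kept.mean(axis=0)

# toy usage: a tight cluster plus a few gross outliers
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([10, 20, 5], 0.5, (40, 3)),
                 rng.uniform(0, 50, (5, 3))])
inliers, estimate = filter_candidates_by_kde(pts)
print(estimate)  # close to (10, 20, 5) once outliers are removed
```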
In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation with datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. Our proposed method enhances the accuracy of probability density estimation by mitigating the impact of outliers on the distribution estimated from the training sample. Unlike the conventional kernel density estimator, our robust estimator can be seen as a weighted sum of the kernel mappings of the samples. The kernel mapping performs the inner product in a Hilbert space, so the kernel density estimate can be viewed as the average of the samples' mappings in the Hilbert space under a reproducing kernel. M-estimation techniques are used to obtain an accurate mean value and to solve for the weights. Meanwhile, complete cross-validation is used as the objective function in the search for the optimal bandwidth, which strongly affects the estimator. Harris Hawks Optimisation is applied to this objective function to improve the estimation accuracy; the experimental results show that it outperforms other optimization algorithms in convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitting performance than the traditional kernel density estimator when the training data contain outliers, and the naïve Bayes classifier equipped with it generalizes better when classifying data with outliers.
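A minimal sketch of the M-estimation idea behind such a robust estimator, assuming a 1-D Gaussian kernel and a Huber-type psi function: samples whose kernel mapping lies far from the current weighted mean in the Hilbert space are down-weighted by kernelized iteratively reweighted least squares. The bandwidth search (complete cross-validation plus Harris Hawks Optimisation) is omitted, and the constant `c` is an assumed tuning value.

```python
import numpy as np

def gauss_kernel(x, y, h):
    """Gaussian kernel matrix between two 1-D sample sets."""
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))

def robust_kde_weights(x, h, c=1.0, iters=30):
    """Kernelized IRLS (a sketch of M-estimation in the RKHS)."""
    n = len(x)
    K = gauss_kernel(x, x, h)
    w = np.full(n, 1.0 / n)                 # start from the plain KDE
    for _ in range(iters):
        # squared RKHS distance of each phi(x_i) to the weighted mean
        d2 = np.diag(K) - 2 * K @ w + w @ K @ w
        d = np.sqrt(np.maximum(d2, 1e-12))
        psi = np.minimum(d, c)              # Huber psi(d) = min(d, c)
        w = (psi / d) / np.sum(psi / d)     # normalized robust weights
    return w

def robust_kde(x_eval, x, w, h):
    """Weighted KDE: average of kernel mappings with robust weights."""
    return gauss_kernel(x_eval, x, h) @ w

# usage: the density near the mode is preserved despite 10% outliers
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 180), rng.uniform(8, 12, 20)])
w = robust_kde_weights(x, h=0.4)
print(robust_kde(np.array([0.0, 10.0]), x, w, 0.4))
```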
Controlled experiments are widely used in many applications to investigate the causal relationship between input factors and experimental outcomes. A completely randomised design is usually used to randomly assign treatment levels to experimental units. When covariates of the experimental units are available, the experimental design should achieve covariate balance among the treatment groups, such that the statistical inference of the treatment effects is not confounded with any possible effects of covariates. However, covariate imbalance often exists, because the experiment is carried out based on a single realisation of the complete randomisation. It is more likely to occur, and to worsen, when the number of experimental units is small or moderate. In this paper, we introduce a new covariate balancing criterion, which measures the differences between kernel density estimates of the covariates of the treatment groups. To achieve covariate balance before the treatments are randomly assigned, we partition the experimental units by minimising the criterion, then randomly assign the treatment levels to the partitioned groups. Through numerical examples, we show that the proposed partition approach can improve the accuracy of the difference-in-mean estimator and outperforms the complete randomisation and rerandomisation approaches.
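As an illustration of such a criterion, the sketch below measures imbalance for one covariate as the integrated squared difference between the two groups' kernel density estimates; the evaluation grid and scipy's default bandwidth rule are assumptions, and the paper's partitioning algorithm that minimises the criterion is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_imbalance(cov_a, cov_b, grid_size=256):
    """Integrated squared difference between the KDEs of one covariate
    in two treatment groups; smaller values mean better balance."""
    lo = min(cov_a.min(), cov_b.min())
    hi = max(cov_a.max(), cov_b.max())
    grid = np.linspace(lo, hi, grid_size)
    diff = gaussian_kde(cov_a)(grid) - gaussian_kde(cov_b)(grid)
    return float((diff ** 2).sum() * (grid[1] - grid[0]))

# usage: a random split scores far lower than a deliberately skewed one
rng = np.random.default_rng(1)
x = rng.normal(size=100)
print(kde_imbalance(x[:50], x[50:]))                     # small
print(kde_imbalance(np.sort(x)[:50], np.sort(x)[50:]))   # large
```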
Let {Xn, n ≥ 1} be a strictly stationary sequence of random variables that are either associated or negatively associated, and let f(·) be their common density. In this paper, the author shows a central limit theorem for a kernel estimate of f(·) under certain regularity conditions.
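For orientation, the kernel estimate in question and the typical form such a central limit theorem takes (a sketch; the precise regularity conditions, and the limiting variance under association or negative association, are those of the paper):

```latex
\hat{f}_n(x) = \frac{1}{n h_n} \sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h_n}\right),
\qquad
\sqrt{n h_n}\,\bigl(\hat{f}_n(x) - \mathbb{E}\,\hat{f}_n(x)\bigr)
\xrightarrow{d} N\!\left(0,\; f(x)\int K^2(u)\,du\right).
```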
A road network is a critical component of public infrastructure and a supporting system of social and economic development. Based on a modified kernel density estimation (KDE) algorithm, this study evaluated the road service capacity provided by a road network composed of multi-level roads (i.e. national, provincial, county and rural roads), taking into account the differences in effect extent and intensity for roads of different levels. Summarized at the town scale, the population burden and the annual rural economic income per unit of road service capacity were used as surrogates of the social and economic demands for road service. This method was applied to the road network of the Three Parallel Rivers Region, northwestern Yunnan Province, China, to evaluate the development of the road network in this region. The results show that the total road length of this region in 2005 was 3.70×10⁴ km, and the length ratio between national, provincial, county and rural roads was 1:2:8:47. From 1989 to 2005, the regional road service capacity increased by 13.1%, of which the contributions from the national, provincial, county and rural roads were 11.1%, 19.4%, 22.6%, and 67.8%, respectively, revealing the effect of the 'All Village Accessible' policy of road development in the mountainous regions over the last decade. The spatial patterns of population burden and economic requirement per unit of road service suggested that areas farther away from the national and provincial roads have higher road development priority (RDP). Based on the modified KDE model and the framework of RDP evaluation, this study provides a useful approach for developing an optimal plan of road development at the regional scale.
In the process of large-scale, grid-connected wind power operations, it is important to establish an accurate probability distribution model for wind farm fluctuations. In this study, a wind power fluctuation modeling method is proposed based on the method of moving average and an adaptive nonparametric kernel density estimation (NPKDE) method. Firstly, the method of moving average is used to reduce the fluctuation of the sampled wind power component, and the probability characteristics of the model are then determined based on the NPKDE. Secondly, the model is improved adaptively, and is then solved by using constraint-order optimization. The simulation results show that this method has better accuracy and applicability compared with modeling methods based on traditional parameter estimation, and solves the local adaptation problem of the traditional NPKDE.
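A minimal sketch of the two-step construction, assuming a simple rectangular moving-average window and scipy's default KDE bandwidth; the paper's adaptive NPKDE with constraint-order optimization is not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fluctuation_pdf(power, window=12):
    """Smooth the sampled wind power with a moving average, take the
    residual as the fluctuation component, and fit its distribution
    nonparametrically with a KDE. The window length is an assumption."""
    kernel = np.ones(window) / window
    trend = np.convolve(power, kernel, mode="same")   # moving average
    fluctuation = power - trend                       # fluctuating part
    return gaussian_kde(fluctuation)

# usage: evaluate the fitted fluctuation PDF on a grid
rng = np.random.default_rng(2)
power = np.cumsum(rng.normal(0, 1, 1000)) + rng.normal(0, 0.3, 1000)
pdf = fluctuation_pdf(power)
print(pdf(np.linspace(-2, 2, 5)))
```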
In order to improve the performance of the particle filter (PF) based probability hypothesis density (PHD) algorithm in terms of number estimation and state extraction of multiple targets, a new PHD filter algorithm based on the marginalized particle filter and kernel density estimation is proposed, which utilizes the idea of the marginalized particle filter to enhance the estimating performance of the PHD. The state variables are decomposed into linear and nonlinear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multi-target system after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimation variance and improve the accuracy of target number estimation. Mean-shift kernel density estimation, which inherently searches for peak values via adaptive gradient-ascent iterations, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak positions of the PHD distribution while avoiding errors due to inaccuracies in modeling and parameter estimation. Experiments show that the proposed algorithm can obtain higher tracking accuracy when using fewer sampling particles and has lower computational complexity compared with the PF-PHD.
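The state-extraction step can be sketched with an off-the-shelf mean-shift clusterer: posterior particles are clustered by kernel density ascent and the cluster centers are reported as target states, without fixing the number of targets in advance. Resampling by weight before clustering, and the fixed `bandwidth`, are simplifying assumptions rather than the paper's exact scheme.

```python
import numpy as np
from sklearn.cluster import MeanShift

def extract_states(particles, weights, bandwidth=1.0):
    """Cluster weighted posterior particles by mean shift and return
    the cluster centers as the extracted multi-target states."""
    rng = np.random.default_rng(3)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    ms = MeanShift(bandwidth=bandwidth).fit(particles[idx])
    return ms.cluster_centers_        # one row per detected target

# usage: two well-separated targets in 2-D
rng = np.random.default_rng(4)
parts = np.vstack([rng.normal([0, 0], 0.3, (200, 2)),
                   rng.normal([5, 5], 0.3, (200, 2))])
w = np.full(len(parts), 1.0 / len(parts))
print(extract_states(parts, w))       # roughly (0, 0) and (5, 5)
```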
In this paper, we propose a new method that combines collage error in the fractal domain and Hu moment invariants for image retrieval with a statistical method, variable-bandwidth kernel density estimation (KDE). The proposed method is called CHK (KDE of Collage error and Hu moment) and it is tested on the Vistex texture database with 640 natural images. Experimental results show that the average retrieval rate (ARR) can reach 78.18%, which demonstrates that the proposed method performs better than using either feature alone, as well as the commonly used histogram method, in both retrieval rate and retrieval time.
A novel diversity-sampling based nonparametric multi-modal background model is proposed. Using the samples having the more popular and various intensity values in the training sequence, a nonparametric model is built for background subtraction. According to the related intensities, different weights are given to the distinct samples in the kernel density estimation. This avoids repeated computation using all samples, and makes computation more efficient in the evaluation phase. Experimental results show the validity of the diversity-sampling scheme and the robustness of the proposed model in moving object segmentation. The proposed algorithm can be used in outdoor surveillance systems.
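A minimal per-pixel sketch of this kind of nonparametric background model, with uniform sample weights standing in for the paper's diversity-based weighting; the kernel width `h` and the `threshold` are assumed tuning values.

```python
import numpy as np

def background_probability(history, frame, h=8.0):
    """Per-pixel Gaussian KDE over each pixel's sampled history.

    history : (k, H, W) past intensity samples per pixel
    frame   : (H, W) current grayscale frame
    """
    diff = (history - frame[None, :, :]) / h
    kde = np.exp(-0.5 * diff ** 2).mean(axis=0) / (h * np.sqrt(2 * np.pi))
    return kde                        # low value => likely foreground

def foreground_mask(history, frame, threshold=1e-3):
    return background_probability(history, frame) < threshold

# usage on a toy 4x4 scene with one changed pixel
rng = np.random.default_rng(8)
hist = rng.normal(100, 2, (50, 4, 4))
frame = rng.normal(100, 2, (4, 4))
frame[0, 0] = 180.0
print(foreground_mask(hist, frame))   # True only at (0, 0)
```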
A kernel density estimator is proposed for data subject to censoring in the multivariate case. The asymptotic normality, strong convergence and asymptotically optimal bandwidth minimizing the mean square error of the estimator are studied.
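For reference, the pointwise mean square error that such an optimal bandwidth minimizes, in its standard univariate form (a sketch; the censored multivariate setting of the paper adds terms involving the censoring distribution):

```latex
\mathrm{MSE}\bigl(\hat{f}_h(x)\bigr)
\approx \underbrace{\frac{h^{4}}{4}\bigl(f''(x)\,\mu_2(K)\bigr)^{2}}_{\text{squared bias}}
+ \underbrace{\frac{f(x)\,R(K)}{n h}}_{\text{variance}},
\qquad
h_{\mathrm{opt}}(x) = \left(\frac{f(x)\,R(K)}{\bigl(f''(x)\,\mu_2(K)\bigr)^{2}\,n}\right)^{1/5},
```

where R(K) = ∫K(u)² du and μ2(K) = ∫u²K(u) du.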
Beijing's Xianyukou Hutong (hutong refers to a historical and cultural block in Chinese) occupies an important geographical location with a unique urban fabric, and after years of renewal and protection, the commercial space of Xianyukou Street has gained some recognition. Taking Xianyukou, a commercial hutong in Beijing, as an example, this article carries out spatial analysis using methods such as the GIS kernel density method and space syntax, following site investigation and research. Based on the street-space problems found, the paper then puts forward strategies to improve and upgrade Xianyukou Street's commercial space and to improve businesses in Xianyukou Street and other similar hutongs.
One-class support vector machine (OCSVM) and support vector data description (SVDD) are the two main domain-based one-class (kernel) classifiers. To reveal their relationship with density estimation in the case of the Gaussian kernel, OCSVM and SVDD are first unified into the framework of kernel density estimation, and the essential relationship between them is explicitly revealed. It is then proved that the density estimate induced by OCSVM or SVDD agrees with the true density and can also reduce the integrated squared error (ISE). Finally, experiments on several simulated datasets verify the revealed relationships.
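The flavor of this unification can be seen numerically: with a Gaussian (RBF) kernel, the OCSVM decision function is a nonnegative weighted sum of kernels centered on the support vectors, i.e. a sparse counterpart of a KDE. A hedged sketch, with `nu`, `gamma`, and the KDE bandwidth as illustrative choices:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
x = rng.normal(0, 1, (500, 1))

ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.5).fit(x)
kde = gaussian_kde(x.ravel())

grid = np.linspace(-3, 3, 7).reshape(-1, 1)
svm_score = ocsvm.decision_function(grid).ravel()  # kernel expansion
kde_score = kde(grid.ravel())                      # average of kernels

# both score profiles peak near the mode and fall off in the tails
print(np.corrcoef(svm_score, kde_score)[0, 1])
```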
A novel bandwidth-adaption scheme for the kernel particle filter (BAKPF) is proposed. Selection of the kernel bandwidth is a critical issue in kernel density estimation (KDE). The plug-in method is first adopted to get a global fixed bandwidth by optimizing the asymptotic mean integrated squared error (AMISE). Then, particle-driven bandwidth selection is invoked in the KDE. To get a more effective allocation of the particles, the KDE with adaptive bandwidth in the BAKPF is used to approximate the posterior probability density function (PDF) by moving particles toward the posterior. A closed-form expression of the true distribution is given. The simulation results show that the proposed BAKPF performs better than the standard particle filter (PF), the unscented particle filter (UPF) and the kernel particle filter (KPF) in both efficiency and estimation precision.
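For reference, the AMISE being optimized in the plug-in step and the bandwidth that minimizes it, in their standard univariate forms:

```latex
\mathrm{AMISE}(h) = \frac{R(K)}{n h} + \frac{h^{4}}{4}\,\mu_2(K)^{2}\,R(f''),
\qquad
h_{\mathrm{AMISE}} = \left(\frac{R(K)}{\mu_2(K)^{2}\,R(f'')\,n}\right)^{1/5},
```

with R(g) = ∫g(u)² du and μ2(K) = ∫u²K(u) du; the plug-in method replaces the unknown roughness R(f'') with a pilot estimate to obtain the global fixed bandwidth.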
To address the large computational cost of the variable-bandwidth kernel particle filter and the high complexity of its algorithm, a self-adjusting kernel-function particle filter is presented. Kernel density estimation is used iteratively to obtain the new particle set, and the standard deviation of the particles is introduced into the kernel bandwidth. According to the characteristics of the particle distribution, the bandwidth is dynamically adjusted, so that the particle distribution can be closer to the posterior probability density model of the system. Meanwhile, the kernel density is used to estimate the weights of the updated particles and the system state. The simulation results show the feasibility and effectiveness of the proposed algorithm.
There have been many papers presenting kernel density estimators for a strictly stationary continuous-time process observed over the time interval [0, T]. However, these estimators do not satisfy the property of mean-square continuity when the process is mean-square continuous. In this paper we present a modified kernel estimator and substantiate that the modified estimator satisfies the property of mean-square continuity. In a simulation study, the results show that the modified estimator is better than the original estimator in some cases.
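The baseline estimator these papers consider is the time-averaged kernel estimate below (a sketch of the standard form; the paper's modification that restores mean-square continuity is not reproduced here):

```latex
\hat{f}_T(x) = \frac{1}{T\,h_T} \int_{0}^{T} K\!\left(\frac{x - X_t}{h_T}\right) dt .
```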
To solve the mismatch between the candidate model and the reference model caused by changes of the tracked head over time, a novel mean shift algorithm based on a fusion model is provided. The fusion model describes the tracked head by sampling models of the fore-head and the back-head under different situations; the fusion head reference model is thus represented by the color distribution estimated from both the fore-head and the back-head. The proposed tracking system is efficient, and the fusion model makes it easy to realize continual tracking of the head. The results show that the new tracker is robust up to a 360° rotation of the head on a cluttered background, and the tracking precision is improved.
When analyzing and evaluating risks in insurance, people are often confronted with incomplete information and insufficient data, a situation known as the small-sample problem. In this paper, a one-dimensional small-sample problem in insurance was investigated using the kernel density estimation method (KerM) and the general limited information diffusion method (GIDM). In particular, the MacCormack technique was applied to obtain the solutions of the GIDM equations, and the optimal diffusion solution was then acquired based on two optimization principles. Finally, the analysis introduced in this paper was verified on several examples, and satisfactory results were obtained.
With the uncertainties related to operating conditions, in-service non-destructive testing (NDT) measurements and material properties considered in structural integrity assessment, probabilistic analysis based on the failure assessment diagram (FAD) approach has recently become an important concern. However, the point density revealing the probabilistic distribution characteristics of the assessment points is usually ignored. To obtain more detailed and direct knowledge from the reliability analysis, an improved probabilistic fracture mechanics (PFM) assessment method is proposed. By integrating 2D kernel density estimation (KDE) technology into the traditional probabilistic assessment, the probabilistic density of the randomly distributed assessment points is visualized in the assessment diagram. Moreover, a modified interval sensitivity analysis is implemented and compared with probabilistic sensitivity analysis. The improved reliability analysis method is applied to the assessment of a high-pressure pipe containing an axial internal semi-elliptical surface crack. The results indicate that the two methods give consistent sensitivities of the input parameters, but the interval sensitivity analysis is computationally more efficient. Meanwhile, the point density distribution and its contour are plotted in the FAD, thereby better revealing the characteristics of the PFM assessment. This study provides a powerful tool for the reliability analysis of critical structures.
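A minimal sketch of the visualization step: fit a 2-D KDE to Monte Carlo assessment points in the FAD plane and evaluate it on a grid ready for contouring. The axis names (load ratio Lr, toughness ratio Kr) follow common FAD usage, and the toy sampling model below is an assumption standing in for a real PFM simulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def assessment_point_density(Lr, Kr):
    """2-D KDE over randomly distributed FAD assessment points."""
    kde = gaussian_kde(np.vstack([Lr, Kr]))   # fit the 2-D KDE
    xi, yi = np.meshgrid(np.linspace(Lr.min(), Lr.max(), 100),
                         np.linspace(Kr.min(), Kr.max(), 100))
    zi = kde(np.vstack([xi.ravel(), yi.ravel()])).reshape(xi.shape)
    return xi, yi, zi                          # ready for a contour plot

# toy usage: correlated scatter of assessment points
rng = np.random.default_rng(6)
Lr = rng.lognormal(-0.7, 0.2, 2000)
Kr = 0.5 * Lr + rng.normal(0, 0.05, 2000)
xi, yi, zi = assessment_point_density(Lr, Kr)
print(zi.max())   # the density peak locates the most probable state
```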
An accurate probability distribution model of wind speed is critical to assessing the reliability contribution of wind energy to power systems. Most current models are built using parametric density estimation (PDE) methods, which usually assume that the wind speed follows a certain known distribution (e.g. the Weibull distribution or the Normal distribution) and estimate the parameters of the model from historical data. This paper presents a kernel density estimation (KDE) method, a nonparametric way to estimate the probability density function (PDF) of wind speed. The method is a data-driven approach that makes no assumption on the form of the underlying wind speed distribution and is capable of uncovering the statistical information hidden in the historical data. The proposed method is compared with three parametric models using wind data from six sites. The results indicate that the KDE outperforms the PDE in terms of accuracy and flexibility in describing the long-term wind speed distributions for all sites. A sensitivity analysis with respect to kernel functions is presented, and the Gaussian kernel function proves to be the best. Case studies on a standard IEEE reliability test system (IEEE-RTS) have verified the applicability and effectiveness of the proposed model in evaluating the reliability performance of wind farms.
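The core comparison can be sketched in a few lines: fit a parametric Weibull model and a nonparametric KDE to the same training sample and compare held-out log-likelihoods. The synthetic two-regime sample below is an assumption standing in for the six sites' data.

```python
import numpy as np
from scipy import stats

# toy wind-speed data: a mixture of two Weibull regimes, so a single
# parametric Weibull fit is misspecified while the KDE adapts
rng = np.random.default_rng(7)
wind = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=6.0, size=800, random_state=rng),
    stats.weibull_min.rvs(3.5, scale=12.0, size=400, random_state=rng)])
rng.shuffle(wind)
train, test = wind[:900], wind[900:]

c, loc, scale = stats.weibull_min.fit(train, floc=0)   # parametric PDE
kde = stats.gaussian_kde(train)                         # nonparametric KDE

ll_pde = np.log(stats.weibull_min.pdf(test, c, loc, scale)).sum()
ll_kde = np.log(kde(test)).sum()
print(ll_pde, ll_kde)   # on mixture data the KDE typically fits better
```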