This paper addresses the problem of predicting population density leveraging cellular station data. As wireless communication devices are commonly used, cellular station data has become integral for estimating population figures and studying their movement, thereby contributing significantly to urban planning. However, existing research grapples with issues pertinent to preprocessing base station data and the modeling of population prediction. To address this, we propose methodologies for preprocessing cellular station data to eliminate any irregular or redundant data. The preprocessing reveals a distinct cyclical characteristic and high-frequency variation in population shift. Further, we devise a multi-view enhancement model grounded on the Transformer (MVformer), targeting the improvement of the accuracy of extended time-series population predictions. Comparative experiments, conducted on the above-mentioned population dataset using four alternate Transformer-based models, indicate that our proposed MVformer model enhances prediction accuracy by approximately 30% for both univariate and multivariate time-series prediction assignments. The performance of this model in tasks pertaining to population prediction exhibits commendable results.
In a crowd density estimation dataset, the annotation of crowd locations is an extremely laborious task, and these location annotations are not used in the evaluation metrics. In this paper, we aim to reduce the annotation cost of crowd datasets, and propose a crowd density estimation method based on weakly-supervised learning that, in the absence of crowd position supervision, directly estimates the crowd count by using the number of pedestrians in the image as the supervision. For this purpose, we design a new training method that exploits the correlation between global and local image features by incremental learning to train the network. Specifically, we design a parent-child network (PC-Net) focusing on the global and local image respectively, and propose a linear feature calibration structure to train the PC-Net simultaneously: the child network learns feature transfer factors and feature bias weights and uses them to linearly calibrate the features extracted from the parent network, which improves the convergence of the network by using local features hidden in the crowd images. In addition, we use the pyramid vision transformer as the backbone of the PC-Net to extract crowd features at different levels, and design a global-local feature loss function (L2). We combine it with a crowd counting loss (LC) to enhance the sensitivity of the network to crowd features during the training process, which effectively improves the accuracy of crowd density estimation. The experimental results show that the PC-Net significantly reduces the gap between fully-supervised and weakly-supervised crowd density estimation, and outperforms the comparison methods on five datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50, UCF_QNRF and JHU-CROWD++.
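For concreteness, a minimal sketch of the linear feature calibration operation described above, in which per-channel transfer factors and bias weights recalibrate the parent network's feature map; the shapes and the way the factors are produced are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of linear feature calibration: the child branch is assumed to
# produce per-channel transfer factors (gamma) and bias weights (beta) that
# linearly recalibrate the feature map extracted by the parent branch.
import numpy as np

rng = np.random.default_rng(9)
parent_features = rng.normal(size=(64, 32, 32))            # (channels, H, W) from the parent net
gamma = rng.normal(loc=1.0, scale=0.1, size=(64, 1, 1))    # transfer factors (from the child net)
beta = rng.normal(scale=0.1, size=(64, 1, 1))              # feature bias weights (from the child net)

calibrated = gamma * parent_features + beta                # linear feature calibration
print("calibrated feature map shape:", calibrated.shape)
```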
Random sample partition (RSP) is a newly developed big data representation and management model to deal with big data approximate computation problems. Academic research and practical applications have confirmed that RSP is an efficient solution for big data processing and analysis. However, a challenge for implementing RSP is determining an appropriate sample size for RSP data blocks. While a large sample size increases the burden of big data computation, a small size will lead to insufficient distribution information for RSP data blocks. To address this problem, this paper presents a novel density estimation-based method (DEM) to determine the optimal sample size for RSP data blocks. First, a theoretical sample size is calculated based on the multivariate Dvoretzky-Kiefer-Wolfowitz (DKW) inequality by using the fixed-point iteration (FPI) method. Second, a practical sample size is determined by minimizing the validation error of a kernel density estimator (KDE) constructed on RSP data blocks for an increasing sample size. Finally, a series of persuasive experiments are conducted to validate the feasibility, rationality, and effectiveness of DEM. Experimental results show that (1) the iteration function of the FPI method is convergent for calculating the theoretical sample size from the multivariate DKW inequality; (2) the KDE constructed on RSP data blocks with sample size determined by DEM can yield a good approximation of the probability density function (p.d.f.); and (3) DEM provides more accurate sample sizes than the existing sample size determination methods from the perspective of p.d.f. estimation. This demonstrates that DEM is a viable approach to deal with the sample size determination problem for big data RSP implementation.
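As a rough illustration of the second DEM step (the practical sample size), the sketch below grows an RSP-style block and stops once the validation error of a KDE built on the block stops improving noticeably; the error criterion, tolerance, and synthetic data are assumptions for illustration, and the DKW-based fixed-point iteration for the theoretical size is not reproduced.

```python
# Minimal sketch: pick a practical block sample size by tracking how the
# validation error of a KDE built on a block changes as the block grows.
# The error criterion (integrated squared error against a held-out KDE) and
# the 5% plateau tolerance are illustrative stand-ins, not DEM's exact settings.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
big_data = rng.normal(size=100_000)                          # stand-in "big" dataset
validation = rng.choice(big_data, size=5_000, replace=False)
grid = np.linspace(big_data.min(), big_data.max(), 512)
dx = grid[1] - grid[0]
f_val = gaussian_kde(validation)(grid)                       # reference density on a grid

def validation_error(sample_size):
    """Integrated squared error between a block KDE and the reference KDE."""
    block = rng.choice(big_data, size=sample_size, replace=False)
    return np.sum((gaussian_kde(block)(grid) - f_val) ** 2) * dx

sizes = [200, 500, 1_000, 2_000, 5_000, 10_000]
errors = [validation_error(n) for n in sizes]

best = sizes[-1]
for i in range(len(sizes) - 1):
    if errors[i] - errors[i + 1] < 0.05 * errors[i]:          # error has plateaued
        best = sizes[i]
        break
print("practical block sample size:", best)
```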
A prediction framework based on the evolution of pattern motion probability density is proposed for the output prediction and estimation problem of non-Newtonian mechanical systems, assuming that the system satisfies the generalized Lipschitz condition. As a complex nonlinear system primarily governed by statistical laws rather than Newtonian mechanics, the output of a non-Newtonian mechanics system is difficult to describe through deterministic variables such as state variables, which poses difficulties in predicting and estimating the system's output. In this article, the temporal variation of the system is described by constructing pattern category variables, which are non-deterministic variables. Since pattern category variables have statistical attributes but not operational attributes, operational attributes are assigned to them by posterior probability density, and a method for analyzing their motion laws using probability density evolution is proposed. Furthermore, a data-driven form of the pattern motion probability density evolution prediction method is designed by combining the pseudo partial derivative (PPD), achieving prediction of the probability density that captures the uncertainty of the system's output. Based on this, the final prediction estimate of the system's output value is obtained by minimum variance unbiased estimation. Finally, a corresponding PPD estimation algorithm is designed using an extended state observer (ESO) to estimate the parameters required by the proposed prediction method. The effectiveness of the parameter estimation algorithm and prediction method is demonstrated through theoretical analysis, and the accuracy of the algorithm is verified by two numerical simulation examples.
Monitoring sensors in complex engineering environments often record abnormal data, leading to significant positioning errors. To reduce the influence of abnormal arrival times, we introduce an innovative, outlier-robust localization method that integrates kernel density estimation (KDE) with damping linear correction to enhance the precision of microseismic/acoustic emission (MS/AE) source positioning. Our approach systematically addresses abnormal arrival times through a three-step process: initial location by 4-arrival combinations, elimination of outliers based on three-dimensional KDE, and refinement using a linear correction with an adaptive damping factor. We validate our method through lead-breaking experiments, demonstrating over a 23% improvement in positioning accuracy with a maximum error of 9.12 mm (relative error of 15.80%), outperforming four existing methods. Simulations under various system errors, outlier scales, and ratios substantiate our method's superior performance. Field blasting experiments also confirm the practical applicability, with an average positioning error of 11.71 m (relative error of 7.59%), compared to 23.56, 66.09, 16.95, and 28.52 m for other methods. This research is significant as it enhances the robustness of MS/AE source localization when confronted with data anomalies. It also provides a practical solution for real-world engineering and safety monitoring applications.
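A minimal sketch of the KDE-based outlier-rejection step described above, assuming the candidate source locations from the 4-arrival combinations are already available as 3D points; the arrival-time solver and the damped linear correction are omitted, and the data and density quantile are illustrative choices.

```python
# Minimal sketch of the outlier-rejection step: build a 3D kernel density
# estimate over candidate source locations, keep only candidates in
# high-density regions, and average them into a refined estimate.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
true_source = np.array([10.0, 20.0, 5.0])
good = true_source + rng.normal(scale=0.5, size=(40, 3))     # mutually consistent solutions
bad = true_source + rng.normal(scale=15.0, size=(8, 3))      # outlier solutions
candidates = np.vstack([good, bad])

kde = gaussian_kde(candidates.T)               # gaussian_kde expects shape (dim, n)
density = kde(candidates.T)
keep = density >= np.quantile(density, 0.25)   # drop the lowest-density quarter

refined = candidates[keep].mean(axis=0)
print("refined source estimate:", np.round(refined, 2))
```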
In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation with datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. Our proposed method enhances the accuracy of probability density estimation by mitigating the impact of outliers on the estimated distribution of the training sample. Unlike the conventional kernel density estimator, our robust estimator can be seen as a weighted kernel mapping summary for each sample. This kernel mapping performs the inner product in the Hilbert space, allowing the kernel density estimate to be considered the average of the samples' mappings in the Hilbert space using a reproducing kernel. M-estimation techniques are used to obtain accurate mean values and to solve for the weights. Meanwhile, complete cross-validation is used as the objective function to search for the optimal bandwidth, which impacts the estimator. Harris Hawks Optimization is applied to this objective function to improve the estimation accuracy, and the experimental results show that it outperforms other optimization algorithms in convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitness than the traditional kernel density estimator when the training data contain outliers, and the naïve Bayesian classifier with optimal robust kernel density estimation generalizes better in classification with outliers.
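The sketch below illustrates the kind of robust, weighted kernel density estimator described above: sample weights come from a Huber-type M-estimate of the kernel mean in the reproducing-kernel Hilbert space, computed by iterative reweighting. The bandwidth and Huber threshold are fixed illustrative choices; the complete cross-validation objective and the Harris Hawks bandwidth search are not reproduced.

```python
# Minimal sketch of a robust weighted KDE: each sample's weight comes from a
# Huber-type M-estimate of the kernel mean in the RKHS, obtained by iterative
# reweighting, so low-density outliers contribute less to the density estimate.
import numpy as np

def gauss_kernel(x, y, h):
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))

def robust_weights(x, h, c=1.0, iters=20):
    K = gauss_kernel(x, x, h)
    w = np.full(len(x), 1.0 / len(x))
    for _ in range(iters):
        # squared RKHS distance of each phi(x_i) from the current weighted kernel mean
        d2 = np.diag(K) - 2.0 * (K @ w) + w @ K @ w
        d = np.sqrt(np.maximum(d2, 1e-12))
        psi = np.where(d <= c, 1.0, c / d)   # Huber weight function psi(d)/d
        w = psi / psi.sum()
    return w

def robust_kde(grid, x, w, h):
    return gauss_kernel(grid, x, h) @ w      # weighted kernel density estimate

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(8.0, 0.3, 15)])  # 15 outliers
h = 0.4
w = robust_weights(data, h)
print("mean inlier weight:", w[:300].mean(), "mean outlier weight:", w[-15:].mean())
grid = np.linspace(-4.0, 10.0, 200)
density = robust_kde(grid, data, w, h)       # weighted density estimate over the grid
```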
One-class support vector machine (OCSVM) and support vector data description (SVDD) are two main domain-based one-class (kernel) classifiers. To reveal their relationship with density estimation in the case of the Gaussian kernel, OCSVM and SVDD are first unified into the framework of kernel density estimation, and the essential relationship between them is explicitly revealed. It is then proved that the density estimate induced by OCSVM or SVDD is in agreement with the true density and can also reduce the integrated squared error (ISE). Finally, experiments on several simulated datasets verify the revealed relationships.
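As a small, hedged illustration of the agreement discussed above, the sketch below fits a Gaussian-kernel KDE and an RBF-kernel OCSVM on the same data and checks that their scores rank points similarly; the hyperparameters are illustrative choices, not the paper's settings.

```python
# Small sketch: with a Gaussian/RBF kernel, the OCSVM decision score and a KDE
# tend to rank points similarly, which is the kind of agreement discussed above.
import numpy as np
from scipy.stats import spearmanr
from sklearn.neighbors import KernelDensity
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))

h = 0.5
kde_score = KernelDensity(kernel="gaussian", bandwidth=h).fit(X).score_samples(X)
ocsvm_score = OneClassSVM(kernel="rbf", gamma=1.0 / (2 * h * h), nu=0.1).fit(X).decision_function(X)

rho, _ = spearmanr(kde_score, ocsvm_score)
print(f"rank correlation between KDE and OCSVM scores: {rho:.3f}")
```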
The application of frequency distribution statistics to data provides objective means to assess the nature of the data distribution and viability of numerical models that are used to visualize and interpret data. Two commonly used tools are the kernel density estimation and the reduced chi-squared statistic used in combination with a weighted mean. Due to the wide applicability of these tools, we present a Java-based computer application called KDX to facilitate the visualization of data and the utilization of these numerical tools.
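The two statistics mentioned above have standard textbook forms; a small sketch with made-up measurement values and 1-sigma uncertainties (not tied to KDX itself):

```python
# Small sketch of the two statistics discussed above: an error-weighted mean and
# the reduced chi-squared of the data about that mean. The values and 1-sigma
# uncertainties below are made up for illustration.
import numpy as np

values = np.array([100.2, 99.8, 100.5, 101.1, 99.9])
sigmas = np.array([0.4, 0.3, 0.5, 0.6, 0.4])

w = 1.0 / sigmas**2
weighted_mean = np.sum(w * values) / np.sum(w)
weighted_mean_err = np.sqrt(1.0 / np.sum(w))

# reduced chi-squared: chi-squared per degree of freedom about the weighted mean
red_chi2 = np.sum(((values - weighted_mean) / sigmas) ** 2) / (len(values) - 1)

print(f"weighted mean = {weighted_mean:.2f} +/- {weighted_mean_err:.2f}")
print(f"reduced chi-squared = {red_chi2:.2f}")
```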
Data-driven tools, such as principal component analysis (PCA) and independent component analysis (ICA), have been applied to different benchmarks as process monitoring methods. The difference between the two methods is that the components of PCA are still dependent, while ICA has no orthogonality constraint and its latent variables are independent. Process monitoring with PCA often supposes that the process data or principal components follow a Gaussian distribution. However, this kind of constraint cannot be satisfied by several practical processes. To extend the use of PCA, a nonparametric method is added to PCA to overcome this difficulty, and kernel density estimation (KDE) is rather a good choice. Though ICA is based on non-Gaussian distribution information, KDE can help in the close monitoring of the data. Methods such as PCA, ICA, PCA with KDE (KPCA), and ICA with KDE (KICA) are demonstrated and compared by applying them to a practical industrial Spheripol craft polypropylene catalyzer reactor instead of a laboratory emulator.
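A minimal sketch of the PCA-plus-KDE idea, assuming Hotelling's T² as the monitoring statistic and a 99% KDE quantile as the control limit; the data and thresholds are illustrative, and the ICA/KICA variants are not shown.

```python
# Minimal sketch of PCA monitoring with a KDE-based control limit: compute the
# T^2 statistic from PCA scores on normal operating data, then set the 99%
# control limit from a kernel density estimate of T^2 instead of assuming a
# Gaussian/F distribution.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X_normal = rng.normal(size=(1000, 6)) @ rng.normal(size=(6, 6))   # correlated process variables

pca = PCA(n_components=3).fit(X_normal)
scores = pca.transform(X_normal)
t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)          # Hotelling T^2

# KDE-based control limit: smallest grid value whose estimated CDF reaches 99%
kde = gaussian_kde(t2)
grid = np.linspace(0, t2.max() * 2, 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]
limit = grid[np.searchsorted(cdf, 0.99)]

x_new = rng.normal(size=6) * 3.0                                  # a suspect sample
t2_new = np.sum(pca.transform(x_new[None, :])**2 / pca.explained_variance_)
print(f"T^2 = {t2_new:.1f}, KDE control limit = {limit:.1f}, alarm = {t2_new > limit}")
```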
In this paper, we consider the limit distribution of the error density function estimator in the first-order autoregressive models with negatively associated and positively associated random errors. Under mild regularity assumptions, some asymptotic normality results of the residual density estimator are obtained when the autoregressive models are a stationary process and an explosive process. In order to illustrate these results, some simulations such as confidence intervals and mean integrated square errors are provided in this paper. It shows that the residual density estimator can replace the density "estimator" which contains errors.
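For concreteness, a standard form of the objects involved (a sketch, not the paper's exact statement or conditions):

```latex
% First-order autoregressive model with (here: associated) random errors and its residuals
X_t = \rho X_{t-1} + \varepsilon_t, \qquad t = 1, \dots, n,
\qquad \hat{\varepsilon}_t = X_t - \hat{\rho}_n X_{t-1}.

% Residual-based kernel density estimator of the error density f
\hat{f}_n(x) = \frac{1}{n h_n} \sum_{t=1}^{n} K\!\left(\frac{x - \hat{\varepsilon}_t}{h_n}\right).

% Typical asymptotic normality statement obtained under regularity conditions
\sqrt{n h_n}\,\bigl(\hat{f}_n(x) - f(x)\bigr) \xrightarrow{d}
N\!\Bigl(0,\; f(x) \int K^2(u)\,du\Bigr).
```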
Crowd density is an important factor of crowd stability. Previous crowd density estimation methods are highly dependent on the specific video scene. This paper presents a video scene-invariant crowd density estimation method using Geographic Information Systems (GIS) to monitor crowd size over large areas. The proposed method maps crowd images to GIS, so that crowd density can be estimated for each camera in GIS using an estimation model obtained from one camera. Test results show that a model obtained from one camera in GIS can be adaptively applied to other cameras in outdoor video scenes. A real-time monitoring system for crowd size in large areas based on the scene-invariant model has been successfully used in the 'Jiangsu Qinhuai Lantern Festival, 2012'. It can provide early warning information and a scientific basis for safety and security decision making.
This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies established the association between mortality estimation and seismic intensity without considering the population density. In China, however, the data are not always available, especially in very urgent relief situations after a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to provide a path to analyze the death tolls for earthquakes. The present paper employs the average population density to predict the final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door for conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.
We consider n observations from the GARCH-type model Z = UY, where U and Y are independent random variables. We aim to estimate the density function of Y, where Y has a weighted distribution. We determine a sharp upper bound on the associated mean integrated square error. We also make use of the measure of expected true evidence, so as to determine when the model leads to a crisis and causes data to be lost.
An algorithm to track multiple sharply maneuvering targets without prior knowledge about new target birth is proposed. These targets are capable of achieving sharp maneuvers within a short period of time, such as drones and agile missiles. The probability hypothesis density (PHD) filter, which propagates only the first-order statistical moment of the full target posterior, has been shown to be a computationally efficient solution to multitarget tracking problems. However, the standard PHD filter operates on a single dynamic model and requires prior information about the target birth distribution, which leads to many limitations in terms of practical applications. In this paper, we introduce a nonzero mean, white noise turn rate dynamic model and generalize jump Markov systems to the multitarget case to accommodate sharply maneuvering dynamics. Moreover, to adaptively estimate newborn targets' information, a measurement-driven method based on the recursive random sampling consensus (RANSAC) algorithm is proposed. Simulation results demonstrate that the proposed method achieves significant improvement in tracking multiple sharply maneuvering targets with adaptive birth estimation.
The particle probability hypothesis density (particle-PHD) filter is a tractable approach for Random Finite Set (RFS) Bayes estimation, but the particle-PHD filter cannot directly derive the target track. Most existing approaches combine a data association step to solve this problem. This paper proposes an algorithm which does not need the association step. Our basic idea is based on the clustering algorithm of Finite Mixture Models (FMM). The intensity distribution is first derived by the particle-PHD filter, and then the clustering algorithm is applied to estimate the multitarget states and tracks jointly. The clustering process includes two steps: prediction and update. The key to the proposed algorithm is to use the predictions as the initial points and the convergent points as the estimates. Besides, Expectation-Maximization (EM) and Markov Chain Monte Carlo (MCMC) approaches are used for the FMM parameter estimation.
In the process of large-scale, grid-connected wind power operations, it is important to establish an accurate probability distribution model for wind farm fluctuations. In this study, a wind power fluctuation modeling method is proposed based on the method of moving average and an adaptive nonparametric kernel density estimation (NPKDE) method. Firstly, the method of moving average is used to reduce the fluctuation of the sampled wind power component, and the probability characteristics of the model are then determined based on the NPKDE. Secondly, the model is improved adaptively, and is then solved by using constraint-order optimization. The simulation results show that this method has better accuracy and applicability compared with the modeling method based on traditional parameter estimation, and solves the local adaptation problem of traditional NPKDE.
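A minimal sketch of the two-stage idea on synthetic data: a moving average extracts the slow trend, the remainder is treated as the fluctuation component, and its distribution is modeled with a kernel density estimate; the adaptive bandwidth and constraint-order optimization of the paper's NPKDE are not reproduced.

```python
# Minimal sketch: smooth the sampled wind power with a moving average, treat
# the remainder as the fluctuation component, and model its probability
# distribution with a fixed-bandwidth kernel density estimate. The signal and
# window length below are synthetic/illustrative.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
t = np.arange(10_000)
power = 50 + 20 * np.sin(2 * np.pi * t / 1440) + rng.normal(scale=3.0, size=t.size)

window = 15
kernel = np.ones(window) / window
trend = np.convolve(power, kernel, mode="same")    # method of moving average
fluctuation = power - trend                        # fluctuation component

kde = gaussian_kde(fluctuation)                    # nonparametric density model
tail_prob = kde.integrate_box_1d(-np.inf, -5.0) + kde.integrate_box_1d(5.0, np.inf)
print("estimated probability of |fluctuation| > 5:", round(float(tail_prob), 3))
```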
In this work, we develop an invertible transport map, called KRnet, for density estimation by coupling the Knothe–Rosenblatt (KR) rearrangement and the flow-based generative model, which generalizes the real-valued non-volume preserving (real NVP) model (arXiv:1605.08803v3). The triangular structure of the KR rearrangement breaks the symmetry of the real NVP in terms of the exchange of information between dimensions, which not only accelerates the training process but also improves the accuracy significantly. We have also introduced several new layers into the generative model to improve both robustness and effectiveness, including a reformulated affine coupling layer, a rotation layer and a component-wise nonlinear invertible layer. The KRnet can be used for both density estimation and sample generation, especially when the dimensionality is relatively high. Numerical experiments have been presented to demonstrate the performance of KRnet.
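A minimal numpy sketch of the affine coupling idea that real NVP and KRnet-style flows build on: one half of the dimensions passes through unchanged and parameterizes an invertible affine map of the other half, so the inverse and the log-determinant of the Jacobian are cheap. The scale and shift "networks" here are toy stand-ins; KRnet's reformulated coupling, rotation, and nonlinear layers are not reproduced.

```python
# Minimal sketch of an affine coupling layer: x1 passes through unchanged and
# produces scale/shift for x2, so the map is invertible with a cheap
# log-determinant. The tanh "networks" are toy stand-ins for trained networks.
import numpy as np

rng = np.random.default_rng(6)
W_s, W_t = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

def scale_net(x1):   # stand-in for a learned scale network s(x1)
    return np.tanh(x1 @ W_s)

def shift_net(x1):   # stand-in for a learned shift network t(x1)
    return np.tanh(x1 @ W_t)

def coupling_forward(x):
    x1, x2 = x[:, :2], x[:, 2:]
    s, t = scale_net(x1), shift_net(x1)
    y2 = x2 * np.exp(s) + t
    logdet = s.sum(axis=1)            # log|det J| of the triangular Jacobian
    return np.concatenate([x1, y2], axis=1), logdet

def coupling_inverse(y):
    y1, y2 = y[:, :2], y[:, 2:]
    s, t = scale_net(y1), shift_net(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=1)

x = rng.normal(size=(5, 4))
y, logdet = coupling_forward(x)
print("max inversion error:", np.abs(coupling_inverse(y) - x).max())  # ~1e-16
```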
In order to improve the performance of the probability hypothesis density (PHD) algorithm based particle filter (PF) in terms of number estimation and state extraction of multiple targets, a new probability hypothesis density filter algorithm based on marginalized particles and kernel density estimation is proposed, which utilizes the idea of the marginalized particle filter to enhance the estimating performance of the PHD. The state variables are decomposed into linear and nonlinear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multi-target after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimating variance and improve the accuracy of target number estimation. The mean-shift kernel density estimation, with its inherent nature of searching for peak values via adaptive gradient ascent iterations, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak positions of the PHD distribution while avoiding the errors due to inaccuracy in modeling and parameter estimation. Experiments show that the proposed algorithm can obtain higher tracking accuracy when using fewer sampling particles and is of lower computational complexity compared with the PF-PHD.
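A minimal sketch of the mean-shift state-extraction step described above: weighted mean-shift iterations move start points to local peaks of the kernel-smoothed PHD intensity represented by the particles. The marginalized PF-PHD recursion itself, and the particle weights it would produce, are not reproduced; the particles below are synthetic.

```python
# Minimal sketch of state extraction by weighted mean-shift: each start point
# climbs to a local peak of the kernel-smoothed PHD intensity, and converged
# points that coincide are merged into one target state.
import numpy as np

def mean_shift(particles, weights, starts, bandwidth=1.0, iters=50):
    modes = starts.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            d2 = np.sum((particles - m) ** 2, axis=1)
            k = weights * np.exp(-0.5 * d2 / bandwidth**2)   # weighted Gaussian kernel
            modes[i] = (k[:, None] * particles).sum(axis=0) / k.sum()
    return modes

rng = np.random.default_rng(7)
# particles approximating a PHD intensity with two targets near (0, 0) and (10, 5)
particles = np.vstack([rng.normal([0.0, 0.0], 0.8, size=(300, 2)),
                       rng.normal([10.0, 5.0], 0.8, size=(200, 2))])
weights = np.full(len(particles), 1.0 / len(particles))

starts = particles[rng.choice(len(particles), size=10, replace=False)]
modes = mean_shift(particles, weights, starts)
states = np.unique(np.round(modes, 1), axis=0)   # merge starts that reached the same peak
print("extracted target states:\n", states)
```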
Previous research has identified specific areas of frequent tropical cyclone activity in the North Atlantic basin. This study examines long-term and decadal spatio-temporal patterns of Atlantic tropical cyclone frequencies from 1944 to 2009, and analyzes categorical and decadal centroid patterns using kernel density estimation (KDE) and centrographic statistics. Results corroborate previous research which has suggested that the Bermuda-Azores anticyclone plays an integral role in the direction of tropical cyclone tracks. Other teleconnections such as the North Atlantic Oscillation (NAO) may also have an impact on tropical cyclone tracks, but at a different temporal resolution. Results expand on existing knowledge of the spatial trends of tropical cyclones based on storm category and time through the use of spatial statistics. Overall, location of peak frequency varies by tropical cyclone category, with stronger storms being more concentrated in narrow regions of the southern Caribbean Sea and Gulf of Mexico, while weaker storms occur in a much larger area that encompasses much of the Caribbean Sea, Gulf of Mexico, and Atlantic Ocean off of the east coast of the United States. Additionally, the decadal centroids of tropical cyclone tracks have oscillated over a large area of the Atlantic Ocean for much of recorded history. Data collected since 1944 can be analyzed confidently to reveal these patterns.
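The centrographic statistics referred to above have simple standard forms; a small sketch computing the mean center and standard distance of made-up track points:

```python
# Small sketch of centrographic statistics: the mean center and standard
# distance of a set of track points (decadal centroids are mean centers
# computed per decade). Coordinates below are made up for illustration.
import numpy as np

# (longitude, latitude) of hypothetical tropical cyclone track points
points = np.array([[-75.0, 25.0], [-70.5, 27.2], [-80.1, 23.8],
                   [-66.3, 29.9], [-72.8, 26.4]])

mean_center = points.mean(axis=0)
standard_distance = np.sqrt(np.mean(np.sum((points - mean_center) ** 2, axis=1)))

print("mean center (lon, lat):", np.round(mean_center, 2))
print("standard distance (deg):", round(float(standard_distance), 2))
```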
Crowd density estimation in wide areas is a challenging problem for visual surveillance. Because of the high risk of degeneration, the safety of public events involving large crowds has always been a major concern. In this paper, we propose a video-based crowd density analysis and prediction system for wide-area surveillance applications. In monocular image sequences, the Accumulated Mosaic Image Difference (AMID) method is applied to extract crowd areas having irregular motion. The specific number of persons and velocity of a crowd can be adequately estimated by our system from the density of crowded areas. Using a multi-camera network, we can obtain predictions of a crowd's density several minutes in advance. The system has been used in real applications, and numerous experiments conducted in real scenes (station, park, plaza) demonstrate the effectiveness and robustness of the proposed method.
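A minimal sketch of the accumulated frame-difference idea behind AMID (not the exact algorithm): absolute differences between consecutive frames are accumulated and thresholded to delineate areas of crowd motion; the frames below are synthetic stand-ins for a video.

```python
# Minimal sketch of accumulated frame differencing: absolute differences
# between consecutive frames are accumulated over a window and thresholded to
# mark areas of irregular motion. Frames are synthetic grayscale arrays.
import numpy as np

rng = np.random.default_rng(8)
frames = rng.integers(0, 20, size=(30, 120, 160)).astype(float)        # mostly static background
frames[:, 40:80, 60:100] += rng.integers(0, 120, size=(30, 40, 40))    # region with frame-to-frame changes (crowd stand-in)

accumulated = np.zeros(frames.shape[1:])
for prev, curr in zip(frames[:-1], frames[1:]):
    accumulated += np.abs(curr - prev)          # accumulate inter-frame differences
accumulated /= len(frames) - 1

motion_mask = accumulated > accumulated.mean() + 2 * accumulated.std()
crowd_area_ratio = motion_mask.mean()
print(f"estimated crowd-motion area: {crowd_area_ratio:.1%} of the frame")
```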