Funding: Beijing Municipal Social Science Foundation (22GLC062).
Abstract: As a key node for its surrounding area, a metro station is closely connected with the adjacent urban space, and the two develop cooperatively. Different types of metro stations differ in land use and functional positioning. Based on POI data, this paper mainly used Thiessen polygons, kernel density analysis, and correlation analysis to classify the stations of Beijing Metro Line 7. It then made a detailed analysis of commercial metro stations and examined the distribution characteristics of the commercial metro stations on Line 7.
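As an illustration of the toolkit named in this abstract, the sketch below (with made-up planar coordinates, not real Line 7 data) builds Thiessen (Voronoi) polygons around station points and assigns each POI to the station whose cell contains it, which is simply its nearest station:

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical planar coordinates for five metro stations (not real Line 7 data)
stations = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5], [0.0, 3.0], [2.5, 2.5]])

# Thiessen (Voronoi) polygons partition the plane so that each cell
# contains the locations closer to its station than to any other station
vor = Voronoi(stations)

# Assigning a POI to the station whose Thiessen cell contains it is
# equivalent to picking the nearest station in Euclidean distance
pois = np.array([[0.1, 0.2], [1.9, 0.1], [1.1, 1.4]])
nearest = np.argmin(((pois[:, None, :] - stations[None, :, :]) ** 2).sum(-1), axis=1)
print(len(vor.point_region), nearest.tolist())
```

Counting POIs per cell in this way gives the per-station functional profile that the classification step would then cluster.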
Abstract: Let {Xn, n≥1} be a strictly stationary sequence of random variables that are either associated or negatively associated, and let f(·) be their common density. In this paper, the author proves a central limit theorem for a kernel estimate of f(·) under certain regularity conditions.
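The kernel estimate in question has the standard form f̂(x) = (1/(nh)) Σᵢ K((x − Xᵢ)/h). A minimal numerical sketch, using a Gaussian kernel and i.i.d. standard normal draws as a stand-in for the stationary sequence (the dependence structure does not change the estimator itself):

```python
import numpy as np

rng = np.random.default_rng(0)
x_obs = rng.normal(0.0, 1.0, size=5000)  # stand-in for the sequence {X_n}

def kernel_density(x, data, h):
    """Gaussian-kernel estimate f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h)."""
    u = (x - data) / h
    return np.exp(-0.5 * u * u).sum() / (len(data) * h * np.sqrt(2.0 * np.pi))

# Near the mode of N(0,1) the true density is f(0) = 1/sqrt(2*pi) ~ 0.3989,
# and the estimate should be close for moderate bandwidths and large n
est = kernel_density(0.0, x_obs, h=0.3)
print(round(est, 3))
```

The central limit theorem of the paper concerns the asymptotic normality of precisely this quantity around f(x) as n grows and h shrinks at an appropriate rate.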
Abstract: There has been a vast amount of research on background modeling for detecting moving objects. Two recent reviews [1,2] showed that the kernel density estimation (KDE) method and the Gaussian mixture model (GMM) perform about equally best among candidate background models. For KDE, the choice of kernel functions and their bandwidths greatly influences performance, yet there have been few attempts to compare the adequacy of kernel functions for KDE. In this paper, we evaluate the performance of various kernel functions for KDE. The functions tested include almost every one cited in the literature, and a new function, the Laplacian of Gaussian (LoG), is also introduced for comparison. All tests were run on real videos with varying background dynamics, and the results were analyzed both qualitatively and quantitatively. The effect of different bandwidths was also investigated.
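A single-pixel sketch of KDE background subtraction, under assumptions not taken from the paper (a made-up intensity history, two common kernels, and a hypothetical density threshold): a new intensity is declared background when its estimated density under recent samples is high enough.

```python
import numpy as np

def kde_prob(value, samples, h, kernel="gaussian"):
    """Estimate p(value) from recent background samples with a chosen kernel."""
    u = (value - samples) / h
    if kernel == "gaussian":
        k = np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)
    elif kernel == "epanechnikov":
        k = 0.75 * np.maximum(1.0 - u * u, 0.0)
    else:
        raise ValueError(kernel)
    return k.sum() / (len(samples) * h)

# Recent intensities of one pixel cluster near 100; a (hypothetical) threshold
# flags intensities whose estimated density is very low as foreground
history = np.array([98.0, 99.0, 100.0, 101.0, 102.0, 100.0, 99.0, 101.0])
threshold = 1e-3
for v in (100.0, 180.0):
    p = kde_prob(v, history, h=2.0)
    print(v, p > threshold)  # True = background, False = moving object
```

Swapping the `kernel` argument (and sweeping `h`) is exactly the kind of comparison the paper performs, though over full videos rather than one synthetic pixel.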
Funding: The authors would like to thank all anonymous reviewers for their suggestions and feedback. This work was supported by the National Natural Science Foundation of China (Grant No. 61379103).
Abstract: Logistic regression is often used to solve linear binary classification problems in fields such as machine vision, speech recognition, and handwriting recognition. However, it usually fails on certain nonlinear multi-classification problems, such as problems with imbalanced (non-equilibrium) samples. Many scholars have proposed methods such as neural networks, least squares support vector machines, and the AdaBoost meta-algorithm; these methods essentially belong to the machine learning category. In this work, based on probability theory and statistical principles, we propose an improved logistic regression algorithm based on kernel density estimation for solving nonlinear multi-classification problems. We compared our approach with other methods on imbalanced samples; the results show that our approach preserves sample integrity and achieves superior classification.
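One way to combine kernel density estimation with a logistic-style posterior, shown here as a hedged one-dimensional sketch rather than the paper's actual algorithm: estimate each class-conditional density with a KDE and form P(class|x) by Bayes' rule, which handles imbalanced classes through explicit priors. The data, bandwidth, and priors below are all made up.

```python
import numpy as np

def kde(x, data, h):
    """Gaussian-kernel density estimate at a single point x."""
    u = (x - data) / h
    return np.exp(-0.5 * u * u).sum() / (len(data) * h * np.sqrt(2.0 * np.pi))

# Hypothetical imbalanced 1-D training data: class 0 is nine times larger
rng = np.random.default_rng(1)
class0 = rng.normal(0.0, 1.0, 900)
class1 = rng.normal(4.0, 1.0, 100)
prior0, prior1 = 0.9, 0.1

def posterior1(x, h=0.4):
    """P(class 1 | x) via Bayes' rule on the two kernel density estimates."""
    p0 = kde(x, class0, h) * prior0
    p1 = kde(x, class1, h) * prior1
    return p1 / (p0 + p1)

print(posterior1(0.0) < 0.5, posterior1(4.0) > 0.5)
```

Because every sample enters its class's density estimate, no rebalancing subsampling is needed, which is one plausible reading of "preserves sample integrity".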
Funding: Supported by the Natural Science Foundation of Beijing and the National Natural Science Foundation of China (2 2 30 4 1 0 0 1 30 1).
Abstract: In this paper, the normal approximation rate and the random weighting approximation rate of the error distribution of the kernel estimator of the conditional density function f(y|x) are studied. The results may be used to construct confidence intervals for f(y|x).
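For concreteness, the kernel estimator of f(y|x) can be written in the Nadaraya-Watson form f̂(y|x) = Σᵢ Kₕ(x − Xᵢ)Kₕ(y − Yᵢ) / Σᵢ Kₕ(x − Xᵢ). A minimal sketch on synthetic data (all bandwidths and the data-generating model are assumptions, not the paper's setup):

```python
import numpy as np

def gauss(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)

def cond_density(y, x, X, Y, hx, hy):
    """Nadaraya-Watson style kernel estimate of the conditional density f(y|x)."""
    wx = gauss((x - X) / hx)
    return (wx * gauss((y - Y) / hy) / hy).sum() / wx.sum()

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, 4000)
Y = X + rng.normal(0.0, 0.5, 4000)  # so Y | X = x is N(x, 0.25)

# At x = 0 the true conditional density peaks at y = 0:
# f(0|0) = 1 / (0.5 * sqrt(2*pi)) ~ 0.798
print(round(cond_density(0.0, 0.0, X, Y, 0.15, 0.15), 2))
```

The paper's approximation rates quantify how fast the distribution of the error of exactly this kind of estimate approaches its normal (or random-weighting) limit, which is what makes the confidence-interval construction valid.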
Funding: Supported by the National Natural Science Foundation of China (10371092).
Abstract: This paper introduces concepts such as the q-process in a random environment, the Laplace transform, the ergodic potential kernel, and the error function, together with some basic lemmas. We study the continuity and Laplace transform of the random transition function. Finally, we give a sufficient condition for the existence of the ergodic potential kernel for homogeneous q-processes in random environments.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52109156) and the Science and Technology Project of the Jiangxi Provincial Education Department (Grant No. GJJ190970).
Abstract: Traditional methods for early warning of dam displacements usually assume that residual displacements follow a normal distribution. This assumption deviates from reality, affecting the reliability of early warning results and leading to misjudgments of dam displacement behavior. To solve this problem, this study proposed an early warning method using a non-normal distribution function. A new early warning index was developed using cumulative distribution function (CDF) values. Kernel density estimation was used to calculate the CDF values of residual displacements at a single point, and a copula function was used to compute the CDF values of residual displacements at multiple points. Numerical results showed that, with residual displacements following a non-normal distribution, the proposed early warning method accurately reflected the dam displacement behavior and effectively reduced the frequency of false alarms. This method is expected to aid in the safe operation of dams.
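A hedged sketch of the two ingredients named above, on made-up skewed residuals (the data, bandwidths, copula family, and warning threshold are all assumptions, not the paper's): a KDE-based smoothed CDF gives the single-point index, and a Gaussian copula combines the marginal CDF values of two monitoring points into a joint one.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(3)
# Hypothetical skewed (non-normal) residual displacements at two monitoring points
r1 = rng.gamma(2.0, 1.0, 2000) - 2.0
r2 = 0.7 * r1 + rng.gamma(2.0, 0.5, 2000) - 1.0

def kde_cdf(x, data, h):
    """Smoothed CDF of a Gaussian-kernel density estimate: mean of Phi((x - X_i)/h)."""
    return norm.cdf((x - data) / h).mean()

def normal_scores(data):
    """Rank-based probability-integral transform mapped through Phi^-1."""
    ranks = np.argsort(np.argsort(data)) + 1
    return norm.ppf(ranks / (len(data) + 1))

# Single-point index: marginal KDE-based CDF value of a new residual at each point
u1 = kde_cdf(3.5, r1, 0.3)
u2 = kde_cdf(2.5, r2, 0.3)

# Multi-point index via a Gaussian copula:
# C(u1, u2) = Phi_2(Phi^-1(u1), Phi^-1(u2); rho)
rho = np.corrcoef(normal_scores(r1), normal_scores(r2))[0, 1]
joint = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]]).cdf(
    [norm.ppf(u1), norm.ppf(u2)])
print(u1 > 0.95, 0.0 < joint < 1.0)  # hypothetical warning rule on extreme CDF values
```

Because the CDF values are computed from the estimated (non-normal) distribution itself, an extreme index reflects genuinely unusual behavior rather than a violated normality assumption, which is how false alarms are reduced.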
Abstract: Reliability analysis is key to evaluating software quality. Since the early 1970s, the Power Law Process, among others, has been used to assess the rate of change of software reliability as a time-varying function through its intensity function. The applicability of Bayesian analysis to the Power Law Process is justified using real software failure times. The choice of a loss function is an important element of the Bayesian setting. Analytical likelihood-based Bayesian reliability estimates of the Power Law Process under the squared error and Higgins-Tsokos loss functions were obtained for different prior knowledge of its key parameter. A simulation analysis using real data showed that the Bayesian reliability estimate under the Higgins-Tsokos loss function is not only as robust as the estimate under the squared error loss function but also performs better, and both are superior to the maximum likelihood reliability estimate. A sensitivity analysis showed that the Bayesian estimate of the reliability function is sensitive to the prior, whether parametric or non-parametric, and to the loss function. In addition, an interactive user interface application was developed in the Wolfram Language to compute and visualize the Bayesian and maximum likelihood estimates of the intensity and reliability functions of the Power Law Process for given data.
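The Power Law Process has intensity λ(t) = (β/θ)(t/θ)^(β−1). As a point of reference for the maximum likelihood baseline the abstract compares against, here is a sketch of the standard failure-truncated MLE fit on made-up failure times (the Bayesian estimates under the two loss functions are not reproduced here):

```python
import numpy as np

# Hypothetical software failure times in hours, failure-truncated at the last failure
t = np.array([12.0, 40.0, 95.0, 180.0, 300.0, 455.0, 700.0, 995.0])
n, T = len(t), t[-1]

# Failure-truncated maximum likelihood estimates (Crow-AMSAA form):
#   beta_hat = n / sum_{i<n} ln(T / t_i)   (the i = n term is ln(T/T) = 0)
#   theta_hat = T / n**(1 / beta_hat)
beta_hat = n / np.log(T / t[:-1]).sum()
theta_hat = T / n ** (1.0 / beta_hat)

def intensity(s):
    """PLP intensity lambda(s) = (beta/theta) * (s/theta)**(beta - 1)."""
    return (beta_hat / theta_hat) * (s / theta_hat) ** (beta_hat - 1.0)

# beta_hat < 1 means the failure intensity is decreasing over time,
# i.e. the software exhibits reliability growth
print(beta_hat < 1.0, intensity(T) < intensity(100.0))
```

The Bayesian treatment in the paper replaces these point estimates with posterior estimates of β under the squared error and Higgins-Tsokos loss functions, then propagates them into the intensity and reliability functions.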