Funding: Supported by the Foundation Strengthening Program Technology Field Foundation (2020-JCJQ-JJ-132).
Abstract: The interception probability of a single missile is the basis for combat plan design and weapon performance evaluation, but its influencing factors are complex and mutually coupled. Existing calculation methods offer very limited analysis of how these factors exert their influence, and none has analyzed the influence of the guidance law. This paper considers the influencing factors of both the interceptor and the target more comprehensively. Interceptor parameters include speed, guidance law, guidance error, fuze error, and fragment killing ability, while target performance includes speed, maneuverability, and vulnerability. An interception model is established, Monte Carlo simulations are carried out, and the influence mechanism of each factor is analyzed based on the model and simulation results. Finally, the paper proposes a classification-regression neural network that quickly estimates the interception probability from the values of the influencing factors. The proposed method reduces the interference of invalid interception data with valid data, so its prediction accuracy is significantly better than that of pure regression neural networks.
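The two-stage idea, a classifier that screens out invalid engagements followed by a regressor trained only on valid samples, can be sketched as below. The feature set, network sizes, and validity rule are illustrative assumptions, not the paper's configuration.

```python
# Sketch: classification-regression estimation of interception probability.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
# hypothetical features: interceptor speed, guidance error, fuze error, target speed, maneuverability
X = rng.uniform(0, 1, size=(5000, 5))
p_true = 1 / (1 + np.exp(-(2 * X[:, 0] - 3 * X[:, 1] - X[:, 2] - X[:, 3])))  # toy ground truth
valid = p_true > 0.05                 # stage 1 label: is the engagement feasible at all?
y = np.where(valid, p_true, 0.0)      # stage 2 target: interception probability

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, valid)
reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X[valid], y[valid])

def predict_probability(x):
    """Zero for engagements classified invalid; regression output otherwise."""
    x = np.atleast_2d(x)
    ok = clf.predict(x).astype(bool)
    out = np.zeros(len(x))
    if ok.any():
        out[ok] = np.clip(reg.predict(x[ok]), 0.0, 1.0)
    return out
```

Keeping the invalid samples out of the regression set is what reduces their interference with the valid data.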
Funding: This research was supported by the China National Railway Group Co., Ltd. Research and Development Project (N2022T008).
Abstract: Purpose – The study aims to provide a basis for the effective use of safety-related information data and a quantitative way to assess the occurrence probability of safety risks such as fatigue fracture of key components. Design/methodology/approach – The fatigue crack growth rate is dispersive, and this dispersion is best described by a probability density. In view of the external dispersion caused by the load, a simple and applicable probability expression of the fatigue crack growth rate is adopted based on fatigue growth theory. Considering the isolation among the pairs of crack length a and crack formation time t (a–t data) obtained from the same kind of structural parts, a statistical analysis approach for the distribution of t is proposed, which divides the crack length into several segments. Furthermore, according to the compatibility criterion of crack growth, that is, the statistical development correspondence among the a–t data, the probability model of the crack growth rate is established. Findings – The results show that the crack growth rate in the stable growth stage can be approximately expressed by the crack growth control curve da/dt = 5Q·a, and the probability density of the crack growth parameter Q represents the external dispersion; t follows a two-parameter Weibull distribution at fixed values of a. Originality/value – The probability density f(Q) can be estimated by using the probability model of the crack growth rate, and a calculation example shows that the estimation method is effective and practical.
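Because the control curve da/dt = 5Q·a integrates to a(t) = a0·exp(5Qt), each measured formation time t at a fixed crack length a yields one sample of Q via Q = ln(a/a0)/(5t). A minimal sketch on synthetic data follows; the lognormal scatter for Q is an assumption, and on real a–t data the times would be measured rather than generated.

```python
# Sketch: back-calculating the growth parameter Q from a~t data under
# da/dt = 5*Q*a, and fitting the two-parameter Weibull model for t.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a0, a_obs = 0.5, 2.0                          # assumed initial and observed crack lengths (mm)
Q_true = rng.lognormal(-4.0, 0.3, size=200)   # hypothetical specimen-to-specimen scatter
t_obs = np.log(a_obs / a0) / (5.0 * Q_true)   # formation times at the fixed length a_obs

# t at a fixed crack length is modeled by a two-parameter Weibull distribution
shape, loc, scale = stats.weibull_min.fit(t_obs, floc=0.0)

# one Q sample per specimen, then a kernel estimate of its density f(Q)
Q_hat = np.log(a_obs / a0) / (5.0 * t_obs)
f_Q = stats.gaussian_kde(Q_hat)
```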
Abstract: In the evaluation of some simulation systems, only small samples of data can be obtained due to limited conditions. To address the evaluation problem of small-sample data, an interval estimation approach with an improved grey confidence degree is proposed. On the basis of the definition of grey distance, three definitions of the grey weight for every sample element in the grey estimated value are put forward, and the improved grey confidence degree is then designed. In accordance with the new concept, the grey interval estimation for small-sample data is deduced. Furthermore, the bootstrap method is applied to obtain a more accurate grey confidence interval. Through bootstrap resampling, numerous small samples with corresponding confidence intervals can be obtained, and the final confidence interval is calculated from the union of these grey confidence intervals. In the end, a simulation system evaluation using the proposed method is conducted. The simulation results show that a reasonable confidence interval is acquired, which demonstrates the feasibility and effectiveness of the proposed method.
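A conservative reading of the bootstrap step can be sketched as follows; the per-resample interval construction (normal approximation at the 95% level) is an assumption for illustration, not the paper's grey interval.

```python
# Sketch: bootstrap resampling of a small sample, one confidence interval per
# resample, with the final interval taken as the union of all of them.
import numpy as np

sample = np.array([9.8, 10.4, 9.6, 10.9, 10.1])   # hypothetical small sample
rng = np.random.default_rng(2)
lows, highs = [], []
for _ in range(2000):
    res = rng.choice(sample, size=sample.size, replace=True)
    m, s = res.mean(), res.std(ddof=1)
    half = 1.96 * s / np.sqrt(res.size)           # normal-approximation interval
    lows.append(m - half)
    highs.append(m + half)
final_interval = (min(lows), max(highs))          # union of the per-resample intervals
```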
Funding: Project (61101185) supported by the National Natural Science Foundation of China; Project (2011AA1221) supported by the National High Technology Research and Development Program of China.
Abstract: In order to improve the performance of the particle filter (PF) implementation of the probability hypothesis density (PHD) algorithm in estimating the number and extracting the states of multiple targets, a new PHD filter algorithm based on marginalized particles and kernel density estimation is proposed, which uses the idea of the marginalized particle filter to enhance the estimation performance of the PHD. The state variables are decomposed into linear and nonlinear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multi-target system after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimation variance and improve the accuracy of target number estimation. Mean-shift kernel density estimation, which inherently searches for peak values via adaptive gradient-ascent iteration, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak positions of the PHD distribution while avoiding errors due to inaccuracies in modeling and parameter estimation. Experiments show that the proposed algorithm obtains higher tracking accuracy with fewer sampled particles and has lower computational complexity than the PF-PHD.
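The extraction step can be illustrated with a weighted mean-shift iteration over the particle set: each start point climbs to a local peak of the kernel-smoothed PHD surface, so the number of distinct modes, rather than a preset target count, determines the extracted states. The Gaussian kernel, bandwidth, and start-point choice below are assumptions.

```python
# Sketch: mean-shift mode seeking on weighted particles (Gaussian kernel).
import numpy as np

def mean_shift(particles, weights, starts, bandwidth=1.0, iters=50, tol=1e-4):
    modes = []
    for x in np.atleast_2d(starts).astype(float):
        for _ in range(iters):
            d2 = np.sum((particles - x) ** 2, axis=1)
            k = weights * np.exp(-0.5 * d2 / bandwidth**2)   # kernel-weighted particle mass
            x_new = (k[:, None] * particles).sum(axis=0) / k.sum()
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        modes.append(x)
    return np.array(modes)

# e.g. modes = mean_shift(parts, w, starts=parts[::100]); nearby modes are then merged
```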
Funding: The National Natural Science Foundation of China (No. 61502422); the Natural Science Foundation of Zhejiang Province (No. LY18F020028, LQ15F020006); the Natural Science Foundation of Zhejiang University of Technology (No. 2014XY007).
Abstract: By analyzing circuit structures, a novel approach for signal probability estimation in very large-scale integration (VLSI), based on the improved weighted averaging algorithm (IWAA), is proposed. Considering the failure probability of each gate, first, the first reconvergent fan-ins corresponding to the reconvergent fan-outs are identified to locate the important signal-correlation nodes, based on the principle of homologous signal convergence. Second, the reconvergent fan-in nodes of multiple reconverging structures in the circuit are identified via sensitization paths to determine the sources of interference in the signal probability calculation. Then, the weighted signal probability is calculated by the weighted-average approach to correct the signal probability. Finally, the reconvergent fan-out is quantified by a mixed-calculation strategy of signal probability to reduce the impact of multiple reconvergent fan-outs on accuracy. Simulation results on the ISCAS85 benchmark circuits show that the proposed method has approximately linear time and space consumption as the number of gates increases, and its accuracy is 4.2% higher than that of the IWAA.
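The baseline that such corrections improve on is first-order probability propagation, which treats all gate inputs as independent; reconvergent fan-out is exactly where that assumption fails. The toy computation below shows the discrepancy on f = (a AND b) OR (a AND c), where signal a reconverges at the OR gate.

```python
# Sketch: independent-input signal-probability propagation vs. the exact value
# obtained by conditioning on the shared (reconvergent) signal.
def p_and(p1, p2): return p1 * p2
def p_or(p1, p2):  return 1.0 - (1.0 - p1) * (1.0 - p2)

pa, pb, pc = 0.5, 0.5, 0.5
naive = p_or(p_and(pa, pb), p_and(pa, pc))  # 0.4375 -- ignores the correlation via a
exact = pa * p_or(pb, pc)                   # 0.3750 -- conditions on a
```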
Abstract: Location-based social networks (LBSNs) feed location-specific data generated by smartphones into online social networks, allowing people to share their points of interest (POIs). POI collections are complex and can be influenced by various factors, such as user preferences, social relationships, and geographical influence. Therefore, recommending new locations in LBSNs requires taking all these factors into consideration. One problem, however, is how to determine the optimal weights of the influencing factors in an algorithm that combines them. User similarity can be obtained from user check-in data, from user friend information, or from the different geographical influences on each user's check-in activities. In this paper, we propose an algorithm that calculates user similarity based on check-in records and social relationships, using a proposed weighting function to adjust the weights of these two kinds of similarity based on the geographical distance between users. In addition, a non-parametric density estimation method is applied to predict the unique geographical influence on each user by estimating the probability density of the distance between every pair of the user's check-in locations. Experimental results on Foursquare datasets, comparing the proposed algorithm with five baseline recommendation algorithms for LBSNs, demonstrate that our algorithm is superior in accuracy and recall, and furthermore alleviates the sparsity problem.
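The geographical component described above can be sketched with a kernel density estimate over pairwise check-in distances, plus a distance-decay weight that shifts the balance between check-in similarity and social similarity; the exponential form and scale are assumptions, not the paper's weighting function.

```python
# Sketch: per-user geographical influence and distance-based similarity weighting.
import numpy as np
from itertools import combinations
from scipy.stats import gaussian_kde

def geo_density(checkins):
    """checkins: (n, 2) projected coordinates of one user's check-ins, n >= 3."""
    dists = [np.linalg.norm(p - q) for p, q in combinations(checkins, 2)]
    return gaussian_kde(dists)              # density of pairwise check-in distances

def blend_weight(d_uv, d0=10.0):
    """Assumed decay: nearby users lean on check-in similarity, distant on social."""
    return np.exp(-d_uv / d0)

# similarity(u, v) = w * checkin_sim + (1 - w) * social_sim, with w = blend_weight(d_uv)
```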
Abstract: A new congestion-driven placement algorithm based on cell inflation is described. In this approach, probability estimation and a star model are used to evaluate the routing of nets. Global placement is performed using a global optimization algorithm with slicing partitioning. The notion of the virtual area of a cell is introduced to capture not only the cell's area but also its routing demand. The virtual area is obtained by the cell-inflation strategy, through which routing congestion is eliminated during slicing partitioning. Further reduction in congestion is achieved by cell moving. The algorithm has been tested on a set of sample circuits from American companies, and a great improvement in routability has been obtained.
Funding: Supported by the National Key Research and Development Program (No. 2019YFE0118400).
Abstract: Anomaly detection based on data collected from the supervisory control and data acquisition (SCADA) system is crucial to reduce the failure rate of wind turbines (WTs). The difficulty for such methods is to dynamically identify the threshold for anomaly detection under changing operating conditions. In this paper, a generalized WT anomaly detection method based on a combined probability estimation model (CPEM) is proposed. The CPEM estimates the conditional probability density function (PDF) of the target variable given the changing conditions. Its generalization and accuracy are better than those of any independent probability estimation model because it combines the advantages of various kinds of probability estimation models through linear combination. Using the CPEM, the normal operating bounds under different operating conditions can be obtained, and these dynamically form the thresholds for anomaly detection. Meanwhile, with respect to these thresholds, hypothesis testing (HT) is adopted to identify anomalies by inspecting whether the observations exceed the thresholds at a given significance level, providing sound mathematical support for anomaly detection and making the detection results more reliable. The effectiveness of the proposed method is tested using actual data from WTs with known faults. The results show that the proposed method can detect the abnormal operating states of the gearbox and generator much earlier than the system fault alarm.
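A minimal sketch of the two CPEM ingredients, the linear mixture of conditional density estimators and the quantile bounds used in the hypothesis test, is given below; the estimator interface and fixed weights are assumptions (the paper's combination weights may themselves be learned).

```python
# Sketch: mixture of conditional pdf estimators -> dynamic operating bounds -> test.
import numpy as np

def combined_bounds(estimators, weights, cond, grid, alpha=0.01):
    """estimators: callables (cond, grid) -> pdf values of the target on `grid`."""
    dx = grid[1] - grid[0]
    pdf = sum(w * est(cond, grid) for w, est in zip(weights, estimators))
    pdf = pdf / (pdf.sum() * dx)                       # renormalize the mixture
    cdf = np.cumsum(pdf) * dx
    lo = grid[np.searchsorted(cdf, alpha / 2)]
    hi = grid[min(np.searchsorted(cdf, 1 - alpha / 2), grid.size - 1)]
    return lo, hi                                      # dynamic normal-operating bounds

def is_anomaly(y_obs, lo, hi):
    """Reject H0 ('normal operation') when the observation leaves the bounds."""
    return y_obs < lo or y_obs > hi
```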
Abstract: Background: The Cox proportional hazards (Cox-PH) model has been a popular method for survival analysis of cancer data, modeling survival times as a function of covariates or risk factors. However, the assumptions required for applying the Cox-PH model are seldom satisfied in research studies, raising questions about the effectiveness, robustness, and accuracy of the model in predicting the proportion of survival times. This is because the necessary assumptions are in most cases difficult to satisfy, as is the assessment of interactions among covariates. Methods: To further improve therapeutic/treatment strategies for cancer, we propose a new approach to survival analysis using multiple myeloma (MM) cancer data. We first developed a data-driven nonlinear statistical model that predicts survival times with 93% accuracy. We then performed a parametric analysis of the predicted survival times to obtain the survival function, which is used in estimating the proportion of survival times. Results: The proposed approach has proved to be more robust and gives better estimates of the proportion of survival than the Cox-PH model. Also, satisfying the proposed model's assumptions and finding interactions among risk factors is less difficult than with the Cox-PH model. The proposed model can predict the real values of the survival times, and the identified risk factors are ranked according to their percentage contribution to the survival time. Conclusion: The proposed nonlinear statistical modeling approach to survival analysis of cancer is very efficient and provides an improved and innovative strategy for cancer therapeutics/treatment.
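The parametric step, fitting a survival model to the predicted times and reading proportions off its survival function, can be sketched as follows; the Weibull family and the synthetic times are assumptions for illustration, not the paper's fitted distribution.

```python
# Sketch: parametric survival function from model-predicted survival times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
t_pred = rng.weibull(1.5, size=300) * 40.0     # hypothetical predicted times (months)

shape, loc, scale = stats.weibull_min.fit(t_pred, floc=0.0)

def survival(t):
    """S(t) = P(T > t) under the fitted parametric model."""
    return stats.weibull_min.sf(t, shape, loc=loc, scale=scale)

print(survival(24.0))   # estimated proportion surviving beyond 24 months
```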
Funding: Supported by the National Natural Science Foundation of China (12375131 (YJ), 12375136 (LH)); the CUHK-Shenzhen University Development Fund (UDF01003041); and the BMBF-funded KISS consortium (05D23RI1) in the ErUM-Data action plan (KZ).
Abstract: In this paper, we introduce a novel approach in quantum field theories to estimate actions using artificial neural networks (ANNs). The actions are estimated by learning system configurations governed by the Boltzmann factor, e^(-S), at different temperatures within the imaginary-time formalism of thermal field theory. Specifically, we focus on the 0+1 dimensional quantum field with kink/anti-kink configurations to demonstrate the feasibility of the method. Continuous-mixture autoregressive networks (CANs) enable the construction of accurate effective actions with tractable probability density estimation. Our numerical results demonstrate that this methodology not only facilitates the construction of effective actions at specified temperatures but also adeptly estimates the action at intermediate temperatures using data from both lower- and higher-temperature ensembles. This capability is especially valuable for detailed exploration of phase diagrams.
Funding: Supported by the National High-tech Research and Development Program of China (No. 2011AA7014061).
Abstract: In Bayesian multi-target filtering, knowledge of the measurement noise variance is very important, and significant mismatches in noise parameters will result in biased estimates. In this paper, a new particle filter for a probability hypothesis density (PHD) filter that handles unknown measurement noise variances is proposed. The approach is based on marginalizing the unknown parameters out of the posterior distribution using variational Bayesian (VB) methods. Moreover, the sequential Monte Carlo method is used to approximate the posterior intensity under nonlinear and non-Gaussian conditions. Unlike other particle filters for this challenging class of PHD filters, the proposed method can adaptively learn the unknown and time-varying noise variances while filtering. Simulation results show that the proposed method improves estimation accuracy in terms of both the number of targets and their states.
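The VB idea can be illustrated on a scalar Kalman update with an inverse-gamma model for the unknown measurement-noise variance: the state and variance posteriors are refined alternately within each step. This is a single-sensor illustration in the spirit of VB adaptive filtering, not the authors' PHD implementation.

```python
# Sketch: one VB-adaptive filtering step; the variance sigma^2 ~ InvGamma(alpha, beta)
# is re-estimated jointly with the state by a short fixed-point iteration.
def vb_kalman_step(m, P, y, H, alpha, beta, rho=0.99, n_iter=5):
    alpha = rho * alpha + 0.5          # propagate + update the variance statistics;
    beta_pred = rho * beta             # forgetting factor rho tracks time variation
    beta = beta_pred
    for _ in range(n_iter):
        R = beta / alpha               # current estimate of the noise variance
        S = H * P * H + R
        K = P * H / S
        m_post = m + K * (y - H * m)
        P_post = (1.0 - K * H) * P
        beta = beta_pred + 0.5 * ((y - H * m_post) ** 2 + H * P_post * H)
    return m_post, P_post, alpha, beta
```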
Funding: Supported by the National Natural Science Foundation of China (Nos. 61627810, 61751302, 61833013 and 61973012).
Abstract: Bioinspired polarized-skylight navigation, which can be used in unfamiliar territory, is an important alternative autonomous navigation technique in the absence of a Global Navigation Satellite System (GNSS). However, the polarization pattern in night environments, with noise effects and model uncertainties, is a less explored area. Although several decades have passed since the first publication on the polarization of the moonlit night sky, nocturnal polarization navigation has been addressed only sporadically in previous research. This study demonstrates that nocturnal polarized light is capable of providing accurate and stable navigation information in dim outdoor environments. Based on the statistical characteristics of the Angle of Polarization (AoP) error, a probability density estimation method is proposed for heading determination. To illustrate the application potential, simulation and outdoor experiments were performed. As a result, the proposed method robustly models the distribution of the AoP error and gives accurate heading estimates, with a Standard Deviation (STD) of 0.32° under a clear night sky and 0.47° under a cloudy night sky.
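Heading determination from an error density can be sketched as maximum likelihood: fit the density of AoP residuals from a calibration run, then choose the heading whose predicted AoP pattern best explains the measurements. The kernel estimator, the synthetic calibration data, and the grid search are assumptions, not the paper's estimator.

```python
# Sketch: heading by maximum likelihood under an estimated AoP-error density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
calib_err = rng.normal(0.0, 0.4, size=500)     # hypothetical AoP errors (degrees)
f_err = gaussian_kde(calib_err)                # estimated error density

def heading_mle(aop_meas, aop_model, headings_deg):
    """aop_model(h) -> predicted AoP values matching aop_meas for heading h."""
    ll = [np.sum(np.log(f_err(aop_meas - aop_model(h)) + 1e-12))
          for h in headings_deg]
    return headings_deg[int(np.argmax(ll))]
```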
Abstract: Background: Living cells need to undergo subtle shape adaptations in response to the topography of their substrates. These shape changes are mainly determined by reorganization of their internal cytoskeleton, with a major contribution from filamentous (F) actin. Bundles of F-actin play a major role in determining cell shape and cells' interaction with substrates, either as "stress fibers" or as our newly discovered "concave actin bundles" (CABs), which mainly occur as endothelial cells wrap micro-fibers in culture. Methods: To better understand the morphology and functions of these CABs, it is necessary to recognize and analyze as many of them as possible in complex cellular ensembles, which is a demanding and time-consuming task. In this study, we present a novel algorithm to recognize CABs automatically, without human intervention. We developed and employed a multilayer perceptron artificial neural network ("the recognizer"), which was trained to identify CABs. Results: The recognizer demonstrated a high overall recognition rate and reliability in both randomized training and subsequent testing experiments. Conclusion: It is an effective replacement for validation by visual detection, which is both tedious and inherently prone to error.
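A recognizer of this general shape can be sketched with a small multilayer perceptron over per-bundle descriptors; the feature names, toy labels, and network size below are illustrative, not the authors' actual inputs.

```python
# Sketch: a small MLP classifier for bundle recognition (CAB vs. non-CAB).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 4))     # e.g. curvature, length, intensity, fiber distance
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
recognizer = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000).fit(X_tr, y_tr)
print(f"held-out recognition rate: {recognizer.score(X_te, y_te):.2f}")
```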
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62022041 and the Fundamental Research Funds for the Central Universities of China under Grant No. NP2022103.
Abstract: As an emerging computing technology, approximate computing enables computing systems to utilize hardware resources efficiently. Recently, approximate arithmetic units have received extensive attention and have been employed as hardware modules to build approximate circuit systems, such as approximate accelerators. In order for an approximate circuit system to meet application requirements, it is imperative to quickly estimate the error introduced by each approximate unit, especially in the high-level synthesis of approximate circuits. However, there are few studies in the literature on how to efficiently evaluate the errors in approximate circuit systems. Hence, this paper focuses on error evaluation techniques for circuit systems consisting of approximate adders and approximate multipliers, which are the key hardware components in fault-tolerant applications. The characteristics of probability mass function (PMF) based estimation are analyzed first, and an optimization technique for PMF-based estimation is then proposed with these features in mind. Finally, experiments show that the optimization reduces the time required for PMF-based estimation and improves the estimation quality.
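The core of PMF-based estimation is that the error of a composition of independent approximate units is the convolution of their error PMFs; the cost of repeated convolutions is the kind of expense such optimizations target (for example, by truncating negligible-probability bins). A toy example with assumed error PMFs:

```python
# Sketch: combining the error PMFs of two approximate adders by convolution.
import numpy as np

adder1 = np.array([0.10, 0.80, 0.10])   # hypothetical P(error) over {-1, 0, +1}
adder2 = np.array([0.05, 0.90, 0.05])

combined = np.convolve(adder1, adder2)  # P(error) over {-2, ..., +2}
mean_error = np.dot(np.arange(-2, 3), combined)
```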
Funding: Funded by the National Natural Science Foundation of China (Grant Nos. 41201548 and 5161101688) and the National Social Science Foundation of China (Grant No. 18ZDA105).
Abstract: Supply–demand analysis is an important part of the planning of urban emergency shelters. Using Pudong New Area, Shanghai, China as an example, this study estimated the daytime and nighttime populations of the study area based on fine-scale land use data, census data, statistical yearbook information, and Tencent user-density big data. An exponential-function-based probability density estimation method was used to analyze the spatial supply of and demand for shelters under an earthquake scenario. The results show that even if all potentially available shelters are considered, they still cannot satisfy the demand of the existing population for evacuation and shelter, especially in the northern region of Pudong, under both the daytime and nighttime scenarios. The proposed method can reveal the spatiotemporal imbalance between shelter supply and demand. We also conducted a preliminary shelter location-selection analysis based on the supply–demand results, and the outcome demonstrates the advantage of the proposed method. It can be applied to identify areas where the supply of shelters is seriously inadequate, and provide effective decision support for the planning of urban emergency shelters.
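The exponential distance-decay idea can be sketched as follows: each population cell's demand is split across shelters with weights that fall off exponentially with evacuation distance. The functional form and scale parameter are assumptions; the paper's exact function may differ.

```python
# Sketch: exponential distance-decay allocation of evacuation demand to shelters.
import numpy as np

def demand_weight(dist_m, d0=500.0):
    return np.exp(-dist_m / d0)            # assumed decay scale of 500 m

def allocate(pop, dist_matrix):
    """pop: (n_cells,) population; dist_matrix: (n_cells, n_shelters) distances (m)."""
    w = demand_weight(dist_matrix)
    w = w / w.sum(axis=1, keepdims=True)   # split each cell's demand across shelters
    return pop @ w                         # expected demand arriving at each shelter

# compare allocate(day_pop, D) and allocate(night_pop, D) against shelter capacities
```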
Abstract: The decision tree (DT) model is one of the classical machine learning models, valued for its simplicity and effectiveness in applications. Compared to the DT model, however, probability estimation trees (PETs) give a better estimation of class probability. To get a good probability estimation, we usually need large trees, which are undesirable with respect to model transparency. The linguistic decision tree (LDT) is a PET model based on label semantics: fuzzy labels are used for building the tree, and each branch is associated with a probability distribution over classes. If there is no overlap between neighboring fuzzy labels, these fuzzy labels become discrete labels, and an LDT with discrete labels becomes a special case of the PET model. In this paper, two hybrid models combining the naive Bayes classifier and PETs are proposed in order to build a model with good performance without losing too much transparency. The first model uses naive Bayes estimation given a PET, and the second uses a set of small-sized PETs as estimators by assuming independence between the trees. Empirical studies on discrete and fuzzy labels show that the first model outperforms the PET model at shallow depths, and the second model is equivalent to naive Bayes and the PET.
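The second hybrid, several small PETs fused under an independence assumption, can be sketched as a naive-Bayes-style product of the trees' class-probability estimates; the grouping of features and the exact fusion rule are one plausible reading of the abstract, not the paper's definition.

```python
# Sketch: fusing shallow probability-estimation trees, one per feature group,
# as p(c|x) proportional to p(c)^(1-n) * prod_i p_i(c|x).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_small_pets(X, y, feature_groups, depth=2):
    """One shallow PET per feature group, treated as an independent estimator."""
    return [(g, DecisionTreeClassifier(max_depth=depth).fit(X[:, g], y))
            for g in feature_groups]

def fuse(pets, prior, x):
    log_p = np.log(prior)
    for g, tree in pets:
        p_i = tree.predict_proba(x[g].reshape(1, -1))[0]
        log_p += np.log(p_i + 1e-9) - np.log(prior)
    p = np.exp(log_p - log_p.max())        # normalize in log space for stability
    return p / p.sum()
```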
Abstract: The random forests (RF) algorithm, which combines the predictions from an ensemble of random trees, has achieved significant improvements in classification accuracy. In many real-world applications, however, ranking is often required in order to make optimal decisions, so we focus on the ranking performance of RF in this paper. Our experimental results, based on all 36 UC Irvine Machine Learning Repository (UCI) data sets published on the main website of the Weka platform, show that RF does not perform well in ranking and is in fact about the same as a single C4.4 tree. This raises the question of whether improvements to RF can scale up its ranking performance. To answer this question, we propose an improved random forests (IRF) algorithm. Instead of the information-gain measure and the maximum-likelihood estimate, the average-gain measure and the similarity-weighted estimate are used in IRF. Our experiments show that IRF significantly outperforms all the other compared algorithms in terms of ranking while maintaining the high classification accuracy characteristic of RF.
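The leaf-level change can be illustrated directly: the maximum-likelihood estimate uses only the class counts in a leaf, while a similarity-weighted estimate lets training instances closer to the query contribute more. The Gaussian similarity below is an assumed instantiation of the weighting, not IRF's exact definition.

```python
# Sketch: maximum-likelihood vs. similarity-weighted class probabilities at a leaf.
import numpy as np

def ml_estimate(leaf_labels, n_classes):
    counts = np.bincount(leaf_labels, minlength=n_classes)
    return counts / counts.sum()

def similarity_weighted(leaf_X, leaf_labels, x, n_classes, h=1.0):
    sim = np.exp(-np.sum((leaf_X - x) ** 2, axis=1) / (2 * h * h))
    p = np.zeros(n_classes)
    np.add.at(p, leaf_labels, sim)         # accumulate similarity mass per class
    return p / p.sum()
```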
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61370229, 61370178, 61272067); the National Key Technology R&D Program (Grant No. 2013BAH72B01); the MOE-China Mobile Research Fund (Grant No. MCM20130651); the Natural Science Foundation of GDP (Grant No. S2013010015178); and the Science-Technology Project of GDED (Grant No. 2012KJCX0037).
Abstract: With the rapid development of future networks, there has been explosive growth in multimedia data such as web images, so an efficient image retrieval engine is necessary. Previous studies concentrate on single-concept image retrieval, which has limited practical usability. In practice, users typically query an Internet image retrieval system with multi-concept queries, but the related existing approaches are often ineffective because they merely combine single-concept query techniques. Semantic-concept-based multi-concept image retrieval has therefore become an urgent issue. In this paper, a novel Multi-Concept image Retrieval Model (MCRM) based on a multi-concept detector is proposed, which takes a multi-concept as a whole and directly learns each multi-concept from a rearranged multi-concept training set. The corresponding retrieval algorithm is then presented, and the log-likelihood function of predictions is maximized by gradient descent. Besides, semantic correlations among single concepts and multi-concepts are employed to improve retrieval performance: the semantic correlation probability is estimated with three correlation measures, and the visual evidence is expressed by Bayes' theorem and estimated by a Support Vector Machine (SVM). Experimental results on the Corel and IAPR data sets show that the approach outperforms the state of the art. Furthermore, the model is beneficial for multi-concept retrieval and for difficult retrieval with few relevant images.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11871103).
Abstract: In this paper, a critical Galton-Watson branching process with immigration, Z_n, is studied. We first obtain the convergence rate of the harmonic moment of Z_n. Then the large deviation of S_{Z_n} := ∑_{i=1}^{Z_n} X_i is obtained, where {X_i} is a sequence of independent and identically distributed zero-mean random variables with tail index α > 2. We shall see that the convergence rate is determined by the immigration mean, the variance of the offspring distribution, and the tail index of X_1^+, in contrast with the previous result for the supercritical case, where the rate depends on the Schröder constant and the tail index.
Abstract: This issue of Science China Physics, Mechanics & Astronomy celebrates the centenary of Einstein's General Theory of Relativity, which changed the way humanity understood the concepts of space, time, and matter. Prior to 1915, Einstein had introduced his theory of Special Relativity, and Minkowski had introduced the spacetime metric. General Relativity overthrew the Newtonian idea that space, time, and matter were independent, replacing it with the idea that space, time, and matter are inextricably linked. Within a year of the publication of General Relativity came Schwarzschild's exact solution of Einstein's field equations, which describes the spacetime structure of black holes. In 1916 and 1918, Einstein showed that his theory predicted the existence of gravitational waves. Within seven years, in 1922, Friedmann published a solution of Einstein's field equations applied to a homogeneous universe, uncovering the basic physics of Big Bang cosmology.