Journal Articles
2 articles found
Method to generate training samples for neural network used in target recognition
1
Authors: 何灏, 罗庆生, 罗霄, 徐如强, 李钢. Journal of Beijing Institute of Technology (EI, CAS), 2012, No. 3, pp. 400-407 (8 pages).
Training a neural network to recognize targets requires a large number of samples. These samples are usually gathered in a non-systematic way, which can miss or overemphasize some target information. To improve this situation, a new method based on a virtual model and invariant moments was proposed to generate training samples. The method consists of the following steps: build a virtual model of the target object with computer simulation software and simulate the environment, lighting conditions, camera parameters, etc.; rotate the model by spin and nutation of inclination to obtain an image sequence from a virtual camera; preprocess each image and convert it into a binary image; calculate the invariant moments of each image to obtain a sequence of feature vectors. This vector sequence, which was proved to be complete, forms the training samples together with the target outputs. Simulation results showed that the proposed method can recognize real targets and effectively improve the accuracy of target recognition, provided that the sampling interval is short enough and the simulated circumstances are sufficiently close to the real ones.
Keywords: pattern recognition; training samples for neural network; model emulation; space coordinate transform; invariant moments
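The generation pipeline described in the abstract above (rotate a target model, binarize each view, compute invariant moments, stack the resulting vectors) can be illustrated with a short script. This is only a sketch of the idea, not the paper's implementation: it stands in for the virtual camera by rotating a 2-D binary silhouette with OpenCV, uses the seven Hu invariant moments as the rotation-invariant features, and its function names and 10-degree sampling step are illustrative assumptions.

```python
# Sketch (not the authors' code): rotate a binary silhouette of the target,
# compute the seven Hu invariant moments for each pose, and stack the vectors
# as training samples for a recognition network.
import numpy as np
import cv2


def hu_vector(binary_img: np.ndarray) -> np.ndarray:
    """Return the 7 Hu invariant moments of a binary image, log-scaled
    so the components have comparable magnitudes."""
    m = cv2.moments(binary_img, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log transform keeps the widely varying moment magnitudes trainable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)


def generate_samples(silhouette: np.ndarray, step_deg: float = 10.0) -> np.ndarray:
    """Rotate the silhouette through 360 degrees in `step_deg` increments
    (a stand-in for rotating the virtual model under a virtual camera)
    and collect one invariant-moment vector per pose."""
    h, w = silhouette.shape
    center = (w / 2.0, h / 2.0)
    samples = []
    for angle in np.arange(0.0, 360.0, step_deg):
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        img = cv2.warpAffine(silhouette, rot, (w, h), flags=cv2.INTER_NEAREST)
        samples.append(hu_vector(img))
    return np.vstack(samples)            # shape: (num_poses, 7)


if __name__ == "__main__":
    # Toy target: a filled rectangle as the binarized silhouette.
    canvas = np.zeros((128, 128), dtype=np.uint8)
    cv2.rectangle(canvas, (40, 55), (90, 75), 255, thickness=-1)
    X = generate_samples(canvas, step_deg=10.0)
    print(X.shape)                       # (36, 7) training vectors
```

A smaller `step_deg` yields a denser vector sequence, which corresponds to the short sampling interval that the abstract ties to recognition accuracy.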
Explicit bivariate rate functions for large deviations in AR(1) and MA(1) processes with Gaussian innovations
2
Authors: Maicon J. Karling, Artur O. Lopes, Sílvia R. C. Lopes. Probability, Uncertainty and Quantitative Risk, 2023, No. 2, pp. 177-212 (36 pages).
We investigate the large deviations properties of centered stationary AR(1) and MA(1) processes with independent Gaussian innovations, by giving the explicit bivariate rate functions for the sequence of two-dimensional random vectors. Via the Contraction Principle, we provide the explicit rate functions for the sample mean and the sample second moment. In the AR(1) case, we also give the explicit rate function for the sequence of two-dimensional random vectors (W_n)_{n≥2} = (n^{-1}(∑_{k=1}^{n} X_k, ∑_{k=1}^{n} X_k^2))_{n≥2}, but we obtain an analytic rate function that gives different values for the upper and lower bounds, depending on the evaluated set and its intersection with the respective set of exposed points. A careful analysis of the properties of a certain family of Toeplitz matrices is necessary. The large deviations properties of three particular sequences of one-dimensional random variables follow after we show how to apply a weaker version of the Contraction Principle to our setting, providing new proofs of two already known results on the explicit deviation functions for the sample second moment and the Yule-Walker estimator. We also exhibit the large deviations properties of the first-order empirical autocovariance together with its explicit deviation function, which is a new result.
Keywords: autoregressive processes; empirical autocovariance; large deviations; moving average processes; sample moments; Toeplitz matrices; Yule-Walker estimator
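To make the quantities in the abstract above concrete, here is a small illustrative simulation (not from the paper): it generates a centered stationary AR(1) path with Gaussian innovations and computes the two-dimensional vector W_n = n^{-1}(∑_{k=1}^{n} X_k, ∑_{k=1}^{n} X_k^2) together with the sample mean, sample second moment, first-order empirical autocovariance, and the resulting Yule-Walker estimate of the autoregressive coefficient; the parameter values and function names are arbitrary choices.

```python
# Illustrative simulation: a centered stationary AR(1) process with Gaussian
# innovations and the empirical statistics whose large deviations the
# article studies.
import numpy as np

rng = np.random.default_rng(0)


def simulate_ar1(n: int, phi: float, sigma: float = 1.0) -> np.ndarray:
    """X_k = phi * X_{k-1} + eps_k with eps_k ~ N(0, sigma^2) i.i.d.
    The initial state is drawn from the stationary law N(0, sigma^2/(1-phi^2))
    so the whole path is stationary."""
    assert abs(phi) < 1.0, "stationarity requires |phi| < 1"
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
    eps = rng.normal(0.0, sigma, size=n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + eps[k]
    return x


n, phi = 10_000, 0.5
x = simulate_ar1(n, phi)

# Two-dimensional vector W_n = n^{-1} (sum X_k, sum X_k^2).
W_n = np.array([x.sum(), (x**2).sum()]) / n

sample_mean = W_n[0]                              # n^{-1} sum X_k
sample_second_moment = W_n[1]                     # n^{-1} sum X_k^2
empirical_autocov = np.sum(x[:-1] * x[1:]) / n    # n^{-1} sum_{k=1}^{n-1} X_k X_{k+1}
yule_walker = empirical_autocov / sample_second_moment  # estimate of phi

print(W_n, yule_walker)
```

For large n these empirical quantities concentrate near their stationary expectations; the rate functions derived in the article quantify the exponential decay of the probability of deviations from those values.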