Journal Articles
4 articles found
1. An Improved Biased Parameter Estimation for a Nonlinear Model
Authors: XU Shengbing, GONG Liqiang, LIU Shaoyue. Natural Science Journal of Xiangtan University (《湘潭大学自然科学学报》, CAS, CSCD), 2001, No. 3, pp. 1-5.
This paper discusses the ordinary least squares (OLS) estimation of the parameters of the log-linearized regression equation for the nonlinear model y = ax^b. It shows that the estimator is inefficient because the random error of the log-linear model is heteroscedastic, and it improves the parameter estimates with a two-step weighted least squares method whose advantage is demonstrated in simulation experiments.
Keywords: ordinary least squares; heteroscedasticity; weighted least squares; two-step weighted least squares
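The two-step weighted least squares idea in this abstract can be sketched generically: fit OLS on the log-linearized model, estimate how the error variance changes, then refit with weights inversely proportional to that variance. This is an illustrative sketch only; the log-residual variance model below is an assumption for illustration, not the authors' exact procedure.

```python
import numpy as np

def two_step_wls(x, y):
    """Two-step weighted least squares for the log-linearized power
    model log(y) = log(a) + b*log(x) + e with heteroscedastic e.
    Generic sketch of the technique named in the abstract."""
    X = np.column_stack([np.ones_like(x), np.log(x)])
    z = np.log(y)
    # Step 1: ordinary least squares on the log-linear model.
    beta_ols, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta_ols
    # Step 2 (illustrative variance model): regress log(resid^2) on the
    # same design to get a smooth variance estimate, then reweight.
    gamma, *_ = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)
    sw = np.exp(-0.5 * (X @ gamma))        # sqrt of 1/estimated variance
    beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], z * sw, rcond=None)
    log_a, b = beta_wls
    return np.exp(log_a), b
```

The reweighting downweights observations whose log-scale errors are estimated to be noisier, which is what restores efficiency relative to plain OLS.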
2. Learning rates of least-square regularized regression with polynomial kernels (cited 3 times)
Authors: LI BingZheng, WANG GuoMao. Science China Mathematics (SCIE), 2009, No. 4, pp. 687-700.
This paper presents learning rates for least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given which yields sharp learning rates. The rates depend on the dimension of the polynomial space and of the polynomial reproducing kernel Hilbert space, measured by covering numbers. Meanwhile, we also establish a direct approximation theorem by Bernstein-Durrmeyer operators in L^2_{ρ_X}, where ρ_X is a Borel probability measure.
Keywords: learning theory; reproducing kernel Hilbert space; polynomial kernel; regularization error; Bernstein-Durrmeyer operators; covering number; regularization scheme
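The algorithm family analyzed here, least-square regularized regression (kernel ridge regression) with a polynomial kernel, can be written in a few lines. This is a generic sketch of that family, not the paper's specific regularization scheme; the kernel offset `c` and the `lam * n` scaling are common conventions assumed for illustration.

```python
import numpy as np

def poly_kernel(X1, X2, degree=3, c=1.0):
    # Polynomial kernel K(x, x') = (<x, x'> + c)^degree
    return (X1 @ X2.T + c) ** degree

def krr_fit_predict(X, y, X_test, degree=3, lam=1e-3):
    """Least-square regularized (Tikhonov) regression in the RKHS of a
    polynomial kernel: solve (K + lam*n*I) alpha = y, predict with
    k(x_test, X) @ alpha. Illustrative sketch of the algorithm family."""
    K = poly_kernel(X, X, degree)
    n = len(X)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return poly_kernel(X_test, X, degree) @ alpha
```

The learning rates the paper derives describe how fast the error of such estimators decays as the sample size grows, under the chosen regularization scheme.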
3. Generalization performance of graph-based semisupervised classification (cited 1 time)
Authors: CHEN Hong, LI LuoQing. Science China Mathematics (SCIE), 2009, No. 11, pp. 2506-2516.
Semi-supervised learning has been of growing interest over the past few years, and many methods have been proposed. Although various algorithms are available to implement semi-supervised learning, there are still gaps in our understanding of how the generalization error depends on the numbers of labeled and unlabeled data. In this paper, we consider a graph-based semi-supervised classification algorithm and establish its generalization error bounds. Our results show the close relation between the generalization performance and the structural invariants of the data graph.
Keywords: semi-supervised learning; generalization error; graph Laplacian; graph cut; localized envelope
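A typical member of the algorithm class studied, graph-based semi-supervised classification, fits a labeling that agrees with the few labeled points while being smooth over the data graph via a Laplacian penalty. The quadratic-loss formulation below is a common textbook variant assumed for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def laplacian_ssl(W, y_labeled, labeled_idx, lam=1.0):
    """Graph-based semi-supervised classification sketch: minimize
    squared loss on labeled vertices plus lam * f^T L f, where L is
    the unnormalized graph Laplacian of the similarity matrix W."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    J = np.zeros((n, n))                    # selects labeled vertices
    J[labeled_idx, labeled_idx] = 1.0
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    # Normal equations of the penalized least squares problem:
    # (J + lam * L) f = y, then classify by the sign of f.
    f = np.linalg.solve(J + lam * L, y)
    return np.sign(f)
```

On a graph with two well-connected clusters and one labeled point per cluster, the smoothness penalty propagates each label through its cluster, which is the behavior whose generalization error the paper bounds in terms of graph invariants.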
4. On spline approximation of sliced inverse regression
Authors: Li-ping ZHU, Zhou YU. Science China Mathematics (SCIE), 2007, No. 9, pp. 1289-1302.
Dimension reduction is helpful, and often necessary, in exploring nonparametric regression structure. In this area, sliced inverse regression (SIR) is a promising tool for estimating the central dimension reduction (CDR) space. To estimate the kernel matrix of SIR, we suggest a spline approximation using least squares regression. Heteroscedasticity can be incorporated by introducing an appropriate weight function, and root-n asymptotic normality can be achieved for a wide range of knot choices, essentially analogous to kernel estimation. Moreover, we propose a modified Bayes information criterion (BIC) based on the eigenvalues of the SIR matrix; this modified BIC can be applied to any form of SIR and other related methods. The methodology and some practical issues are illustrated with the horse mussel data. Empirical studies demonstrate the performance of the proposed spline approximation in comparison with existing estimators.
Keywords: asymptotic normality; spline; Bayes information criterion; dimension reduction; sliced inverse regression; structural dimensionality
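The basic SIR estimator that this paper refines can be sketched as: whiten the predictors, slice the data by the response, and take leading eigenvectors of the covariance of slice means. This is the textbook slicing version, shown for orientation; the paper's contribution is a spline-based (rather than sliced) approximation, which is not reproduced here.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Basic sliced inverse regression: estimate central
    dimension-reduction directions from slice means of the
    standardized predictors."""
    n, p = X.shape
    mu, cov = X.mean(0), np.cov(X, rowvar=False)
    # Whiten predictors: Z = (X - mu) cov^{-1/2}
    evals, evecs = np.linalg.eigh(cov)
    cov_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ cov_inv_sqrt
    # Slice by the order of y; accumulate the SIR kernel matrix
    # M = sum_s (n_s/n) * m_s m_s^T from slice means m_s of Z.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(0)
        M += len(s) / n * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original X scale
    w, v = np.linalg.eigh(M)
    beta = cov_inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return beta / np.linalg.norm(beta, axis=0)
```

For a single-index model such as y = f(β'x) + ε with a monotone link, the leading SIR direction aligns with β; the modified BIC the abstract mentions is used to choose how many such directions (the structural dimensionality) to retain.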