Abstract
This article studies the generalization performance of a regularized learning algorithm with a general convex loss function and varying Gaussian kernels. The goal is a satisfactory upper bound on the generalization error of the learning algorithm. The generalization error is measured by the regularization error and the sample error. The regularization error is bounded by constructing a radial basis function (RBF) neural network that exploits the special structure of Gaussian kernels; the sample error is bounded using a projection operator and the covering number of the reproducing kernel Hilbert space induced by Gaussian kernels. The results show that, with a suitable choice of the parameters σ and λ, the generalization performance of the learning algorithm can be improved.
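As a concrete illustration of the setting described in the abstract (not the paper's own construction), the sketch below instantiates the general convex loss as the least-squares loss, for which the regularized RKHS minimizer has a closed form via the representer theorem. The parameters `sigma` (Gaussian kernel width σ) and `lam` (regularization parameter λ), the toy data, and all function names are assumptions chosen for this example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # K(x, y) = exp(-||x - y||^2 / (2 * sigma^2)), the (varying-width) Gaussian kernel.
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def fit_regularized(X, y, sigma, lam):
    # Minimize (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over the RKHS.
    # By the representer theorem f = sum_i alpha_i * K(x_i, .), and for the
    # least-squares loss alpha solves (K + lam * m * I) alpha = y.
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return alpha

def predict(X_train, alpha, X_test, sigma):
    # Evaluate the kernel expansion f(x) = sum_i alpha_i * K(x_i, x).
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Toy usage: noisy samples of sin(x) on [0, 2*pi].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2 * np.pi, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit_regularized(X, y, sigma=0.5, lam=1e-3)
X_test = np.linspace(0.0, 2 * np.pi, 100)[:, None]
y_hat = predict(X, alpha, X_test, sigma=0.5)
```

Varying `sigma` and `lam` here plays the same role as the parameter choice (σ, λ) in the abstract: a wider kernel or larger λ yields a smoother estimate (larger sample-error control, larger approximation error), and vice versa.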
Source
Acta Mathematica Scientia, Series A (《数学物理学报(A辑)》)
CSCD; Peking University Core Journal (北大核心)
2014, No. 5, pp. 1049-1060 (12 pages)
Funding
Supported by the National Natural Science Foundation of China (11301494) and the Zhejiang Provincial Natural Science Foundation (Q12A01026)
Keywords
Learning theory; RBF neural network; Gaussian kernels; Generalization error