
Coefficient-based regularized regression with indefinite kernels by unbounded sampling
Abstract: We investigate coefficient-based regularized least squares regression with unbounded sampling in a data-dependent hypothesis space. The learning scheme is essentially different from the standard one in a reproducing kernel Hilbert space: the kernel need only be continuous and bounded, without being symmetric or positive semi-definite; the regularizer is the l^2-norm of the coefficients in the function's expansion over the samples; and the sampling output is unbounded. These differences add extra difficulty to the error analysis. The goal of this paper is to establish concentration estimates for the error via l^2-empirical covering numbers without assuming that the sampling output is uniformly bounded. By introducing a suitable reproducing kernel Hilbert space and applying concentration techniques with l^2-empirical covering numbers, we derive satisfactory learning rates in terms of the regularity of the regression function and the capacity of the hypothesis space.
Authors: Cai Jia, Wang Cheng
Source: Scientia Sinica Mathematica (《中国科学:数学》), indexed in CSCD and the Peking University core journal list, 2013, No. 6, pp. 613-624 (12 pages)
Funding: National Natural Science Foundation of China (Grant No. 11001247), Guangdong University of Business Studies research project (Grant No. 11BS11001), Huizhou University doctoral start-up fund (Grant No. 0002720), and Huizhou Daya Bay science and technology project (Grant No. 20110103)
Keywords: learning theory, least squares regression, reproducing kernel Hilbert space, l^2-empirical covering number
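The scheme described in the abstract fits a function f(u) = Σ_i α_i K(u, x_i) by minimizing the empirical least squares error plus λ times the l^2-norm of the coefficient vector, with a kernel that is continuous and bounded but need not be symmetric or positive semi-definite. A minimal NumPy sketch of this idea follows; the asymmetric kernel, the synthetic data, and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def indefinite_kernel(s, t):
    # Continuous and bounded, but neither symmetric nor positive
    # semi-definite: a Gaussian with an asymmetric sine perturbation.
    # (Illustrative choice, not the kernel used in the paper.)
    return np.exp(-(s - t) ** 2) + 0.3 * np.sin(s - 2 * t)

def fit_coefficients(x, y, lam):
    # Hypothesis: f_alpha(u) = sum_i alpha_i * K(u, x_i).
    # Minimize (1/m) * ||K @ alpha - y||^2 + lam * ||alpha||^2;
    # the minimizer solves (K^T K / m + lam * I) alpha = K^T y / m.
    m = len(x)
    K = indefinite_kernel(x[:, None], x[None, :])   # m x m Gram matrix
    A = K.T @ K / m + lam * np.eye(m)
    return np.linalg.solve(A, K.T @ y / m)

def predict(alpha, x_train, u):
    # Evaluate f_alpha at the points u.
    return indefinite_kernel(u[:, None], x_train[None, :]) @ alpha

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-2.0, 2.0, size=40))
y = np.sin(np.pi * x) + 0.05 * rng.normal(size=40)  # noisy samples
alpha = fit_coefficients(x, y, lam=1e-3)
y_hat = predict(alpha, x, x)
print(float(np.mean((y_hat - y) ** 2)))  # training mean squared error
```

Because the l^2 regularizer acts directly on the expansion coefficients rather than on an RKHS norm, the normal equations above stay well posed even though the Gram matrix K is not symmetric or positive semi-definite.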