Abstract
This paper studies the generalization performance of the compressed least-squares regression learning algorithm. A generalization error bound for the algorithm is established using random projection, covering-number theory, and probability inequalities. The results show that although compressed learning reduces the sample error at the price of increasing the approximation error, the increment is controllable. In addition, compressed projection overcomes, to a certain extent, the overfitting phenomenon that arises in the learning process.
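The algorithm analyzed in the abstract can be illustrated with a minimal sketch: project the feature vectors to a lower dimension with a random Gaussian matrix, then fit ordinary least squares in the compressed space. The function name, dimensions, and data below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def compressed_least_squares(X, y, m, rng=None):
    """Fit least squares on randomly projected features.

    X : (n_samples, d) design matrix
    y : (n_samples,) targets
    m : compressed dimension, m < d (hypothetical choice)
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Gaussian projection with variance 1/m, so inner products are
    # preserved in expectation (Johnson-Lindenstrauss style scaling).
    P = rng.normal(0.0, 1.0 / np.sqrt(m), size=(d, m))
    Z = X @ P                                  # compressed design matrix
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)  # least squares in R^m
    # The learned predictor applies the same projection to new inputs.
    return lambda X_new: (X_new @ P) @ w

# Illustrative data: a sparse linear signal with redundant features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = X[:, 0] + 0.5 * X[:, 1] + 0.01 * rng.normal(size=200)
f = compressed_least_squares(X, y, m=20, rng=1)
pred = f(X)
```

Fitting in the m-dimensional projected space rather than the original d-dimensional space restricts the hypothesis class, which is the mechanism behind the trade-off the abstract describes: a smaller sample error (less overfitting) in exchange for a controlled increase in approximation error.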
Source
《数学物理学报(A辑)》
CSCD
PKU Core Journal (北大核心)
2014, No. 4, pp. 905-916 (12 pages)
Acta Mathematica Scientia
Funding
Supported by the National Natural Science Foundation of China (61272023, 91330118, 11301494)
Keywords
Machine learning
Compressed sensing
Regression learning algorithm
Error bound
Approximation