Abstract
In statistical inference, robustness means that when the actual data deviate from the assumed model, the results of the algorithm are only slightly perturbed and its predictive performance is preserved. This paper introduces the research methods of statistical robustness into machine learning. The analysis shows that nearest neighbor estimation, a form of local learning, converges to the Bayes optimal estimate as the sample size grows, and the convergence conditions imply that nearest neighbor estimation is a robust estimator. Experiments on synthetic data and real datasets demonstrate that the generalization performance of supervised learning is maintained even when some outliers affect the model.
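The robustness property described in the abstract can be illustrated with a minimal sketch (a hypothetical example, not the paper's exact estimator): a k-nearest-neighbor classifier with k > 1 takes a majority vote over nearby training points, so a single mislabeled outlier among the neighbors is outvoted and the prediction is unchanged.

```python
# Minimal k-nearest-neighbor sketch on scalar features.
# Illustrative only: shows how majority voting (k > 1) dampens
# the influence of a single mislabeled outlier.
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote over the k training points closest to `query`.
    `train` is a list of (x, label) pairs with scalar feature x."""
    neighbors = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Clean data plus one mislabeled outlier near x = 1.0.
data = [(0.9, "A"), (1.0, "A"), (1.1, "A"),
        (1.05, "B"),                      # mislabeled outlier
        (5.0, "B"), (5.1, "B"), (4.9, "B")]

print(knn_predict(data, 1.04, k=1))  # "B": single neighbor, fooled by the outlier
print(knn_predict(data, 1.04, k=3))  # "A": the outlier is outvoted
```

With k = 1 the prediction is dictated by whichever point happens to be nearest, so one outlier flips it; averaging over more neighbors is the local-learning mechanism behind the robustness claim.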
Source
Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》)
Indexed in: EI, CSCD, Peking University Core Journals (北大核心)
2008, No. 6, pp. 768-774 (7 pages)
Funding
Supported by the National Key Basic Research and Development Program of China (No. 2004CB318103) and the National Natural Science Foundation of China (No. 60573078)
Keywords
Local Learning, Robustness, Noisy Data