Learning optimal kernel mapping based on function space with dynamic parameters
Abstract: Kernel methods can discover the nonlinear distribution rules among images of high-precision prints, and this mining capacity is determined by the chosen kernel function and its parameters. Learning and selecting both is a pressing problem for the further development and practical application of kernel function theory. For the specific setting of intelligent print-defect detection, a new optimization-based method was proposed for learning the kernel function and its parameters from a function space with dynamic parameters, so that the kernel method attains optimal performance. Unlike traditional computational methods, the kernel parameter in this function space varies continuously, which extends the learning scope by one dimension. Experimental results show that the iterative algorithm derived from the theoretical analysis needs only ten iterations to obtain the statistically optimal kernel function and its parameters, and the reconstruction error computed with the learned kernel is statistically minimal.
Source: Journal of Computer Applications (CSCD; Peking University core journal), 2013, No. 8, pp. 2337-2340 (4 pages).
Funding: National 973 Program of China (2011CB302402); National Natural Science Foundation of China (11171053).
Keywords: kernel method; optimization problem; defect detection; Kernel Principal Component Analysis (KPCA); image reconstruction.
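The abstract describes selecting a kernel and its parameter by minimizing a reconstruction error over a continuously parameterized kernel family, with KPCA as the underlying method. As a rough illustration of that idea (not the paper's actual algorithm), the sketch below computes the feature-space reconstruction error of RBF-kernel KPCA as a function of the kernel width `gamma` and then picks the minimizer over a range of widths; the function names, the RBF family, and the simple search are all assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kpca_reconstruction_error(X, gamma, n_components=2):
    """Mean feature-space reconstruction error of KPCA with an RBF kernel.

    For each sample, the error is the squared norm of its centered feature
    vector minus the squared norm of its projection onto the leading
    n_components kernel principal directions.
    """
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Double-center the kernel matrix (centering in feature space).
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J
    # np.linalg.eigh returns eigenvalues in ascending order; reverse them.
    w, V = np.linalg.eigh(Kc)
    w, V = w[::-1], V[:, ::-1]
    m = min(n_components, int(np.sum(w > 1e-12)))
    # Squared projection of sample i onto direction j equals w[j] * V[i, j]^2.
    proj = (V[:, :m] ** 2) @ w[:m]
    err = np.diag(Kc) - proj  # residual squared norm per sample
    return float(np.mean(np.maximum(err, 0.0)))

# Choosing gamma by minimizing the error over a (discretized) range of widths.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))          # stand-in for print-image feature vectors
gammas = np.geomspace(1e-3, 10.0, 25)
best_gamma = min(gammas, key=lambda g: kpca_reconstruction_error(X, g))
```

Here the continuous parameter range is discretized for simplicity; the paper's method instead optimizes over the continuously parameterized function space directly, and its iterative algorithm reportedly converges within about ten iterations.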
