
A Quantized Kernel Least Mean Square Scheme with Entropy-Guided Learning for Intelligent Data Analysis (cited by: 4)

Abstract: The quantized kernel least mean square (QKLMS) algorithm is an effective nonlinear adaptive online learning algorithm that constrains the growth of the network size by quantizing the input space, and it can serve as a powerful tool for complex computing in network services and applications. To compress the input and further improve learning performance, this article proposes a novel QKLMS with entropy-guided learning, called EQ-KLMS. Under the consecutive square entropy learning framework, the basic idea of the entropy-guided learning technique is to measure the uncertainty of the input vectors used for QKLMS and to delete those data with larger uncertainty, which are insignificant or likely to cause learning errors; the dataset is thereby compressed. Consequently, by using square entropy, the learning performance of the proposed EQ-KLMS is improved, with high precision and low computational cost. The proposed EQ-KLMS is validated on a weather-related dataset, and the results demonstrate the desirable performance of our scheme.
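The scheme the abstract describes has two stages: an entropy-guided screening pass that drops high-uncertainty inputs, followed by the standard QKLMS online update, in which an input close enough to an existing codebook center updates that center's coefficient instead of growing the network. The sketch below is a minimal illustration of both stages, not the paper's exact method: the Gaussian kernel, the leave-one-out Parzen density used as the uncertainty score, and the parameter names (`eta`, `eps`, `sigma`, `keep_ratio`) are all assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(x, c, sigma=1.0):
    # Gaussian kernel between two input vectors
    return float(np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(c)) ** 2
                        / (2.0 * sigma ** 2)))

def entropy_screen(X, sigma=1.0, keep_ratio=0.9):
    # Hypothetical entropy-guided screening: score each input by its
    # leave-one-out Parzen density (the kernel sum underlying Renyi's
    # quadratic entropy); the lowest-density inputs are treated as the
    # most uncertain and dropped. Returns indices of retained inputs.
    n = len(X)
    scores = np.array([
        sum(gaussian_kernel(X[i], X[j], sigma) for j in range(n) if j != i)
        for i in range(n)
    ]) / (n - 1)
    keep = np.argsort(scores)[::-1][: int(keep_ratio * n)]
    return np.sort(keep)  # retained indices, in original order

class QKLMS:
    # Quantized kernel LMS: an input within distance eps of an existing
    # codebook center updates that center's coefficient; otherwise the
    # input is added as a new center, so the network grows sublinearly.
    def __init__(self, eta=0.5, eps=0.2, sigma=1.0):
        self.eta, self.eps, self.sigma = eta, eps, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gaussian_kernel(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)          # prediction error for target d
        if self.centers:
            dists = [np.linalg.norm(np.asarray(x) - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.eps:     # quantization: merge into nearest center
                self.alphas[j] += self.eta * e
                return e
        self.centers.append(np.asarray(x, dtype=float))  # grow the network
        self.alphas.append(self.eta * e)
        return e
```

A typical run screens the inputs once, then trains QKLMS online on the retained samples; with a nonzero `eps` the codebook stays far smaller than the sample count, which is the network-size constraint the abstract refers to.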
Source: 《China Communications》 (SCIE, CSCD), 2017, Issue 7, pp. 127-136 (10 pages).
Funding: Supported by the National Key Technologies R&D Program of China (Grant No. 2015BAK38B01); the National Natural Science Foundation of China (Grant Nos. 61174103 and 61603032); the National Key Research and Development Program of China (Grant Nos. 2016YFB0700502, 2016YFB1001404, and 2017YFB0702300); the China Postdoctoral Science Foundation (Grant No. 2016M590048); the Fundamental Research Funds for the Central Universities (Grant No. 06500025); the University of Science and Technology Beijing - Taipei University of Technology Joint Research Program (Grant No. TW201610); and the Foundation from the Taipei University of Technology of Taiwan (Grant No. NTUT-USTB-105-4).
Keywords: quantized kernel least mean square (QKLMS); consecutive square entropy; data analysis; least mean square algorithm; intelligent data analysis; quantization; learning algorithm; uncertainty.

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部