
Weighted Learning for Feedforward Neural Networks

Abstract: In this paper, we propose two weighted learning methods for the construction of single hidden layer feedforward neural networks. Both methods incorporate weighted least squares. Our idea is to allow the training instances nearer to the query to offer bigger contributions to the estimated output. By minimizing the weighted mean square error function, optimal networks can be obtained. The results of a number of experiments demonstrate the effectiveness of our proposed methods.
Source: Journal of Electronic Science and Technology (CAS), 2014, No. 3, pp. 299-304 (6 pages)
Funding: supported by the NSC under Grant Nos. NSC-100-2221-E-110-083-MY3 and NSC-101-2622-E-110-011-CC3, and by the "Aim for the Top University Plan" of the National Sun Yat-sen University and Ministry of Education.
Keywords: extreme learning machine, hybrid learning, instance-based learning, weighted least squares
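The abstract describes fitting a single-hidden-layer network by weighted least squares, with training instances closer to the query receiving larger weights. A minimal sketch of that idea follows, using an ELM-style random hidden layer and a Gaussian distance kernel; the function names, the kernel choice, and the bandwidth `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_features(X, W, b):
    # Random hidden layer with sigmoid activation (ELM-style).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def weighted_elm_predict(X_train, y_train, x_query, n_hidden=20, tau=0.5):
    """Predict at x_query with query-weighted least squares.

    Output weights beta minimize the weighted mean square error
    sum_i w_i * (H_i @ beta - y_i)^2, where w_i grows as training
    instance i gets closer to the query.
    """
    d = X_train.shape[1]
    W = rng.normal(size=(d, n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)        # random biases (fixed)
    H = hidden_features(X_train, W, b)   # hidden-layer output matrix

    # Gaussian weighting: instances nearer the query contribute more.
    dist2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-dist2 / (2.0 * tau ** 2))

    # Weighted normal equations: (H^T diag(w) H) beta = H^T diag(w) y.
    Hw = H * w[:, None]
    beta = np.linalg.lstsq(Hw.T @ H, Hw.T @ y_train, rcond=None)[0]

    h_q = hidden_features(x_query[None, :], W, b)
    return float(h_q @ beta)

# Toy regression: y = sin(x), queried at x = 1.
X = np.linspace(-3.0, 3.0, 60)[:, None]
y = np.sin(X).ravel()
pred = weighted_elm_predict(X, y, np.array([1.0]))
```

Because the weights are recomputed per query, this behaves like the lazy, instance-based learners the paper's keywords point to: each prediction solves a small local least-squares problem rather than one global fit.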