
A new learning method using prior information of neural networks

Abstract: In this paper, we present a new learning method for three-layered neural networks that makes use of prior information. When neural networks are used for system identification, all of their weights are usually trained independently, without regard for the interrelations among their values, and the training results are often poor because each parameter influences the others during learning. To overcome this problem, we first derive exact mathematical equations that describe the relations among weight values implied by a set of data conveying prior information. We then present a new learning method that trains only a subset of the weights and computes the remaining ones from these exact equations. In almost all cases, this method preserves the prior information, given as a mathematical structure, exactly throughout learning. In addition, a learning method for prior information expressed as inequalities is presented. In either case, the degrees of freedom of the network (the number of adjustable weights) are limited appropriately in order to speed up learning and keep errors small. Numerical computer simulation results are provided to support the proposed approaches.
Source: Science in China (Series F), 2004, No. 6, pp. 793-814 (22 pages).
Keywords: prior information, neural network learning, part parameter learning, exact mathematical structure.
