
A novel wavelet neural network architecture for modeling large-scale industrial processes (cited by: 1)

New architecture of wavelet neural network for modeling large-scale industrial processes
Abstract A new architecture based on wavelets and neural networks is presented, aimed at a class of problems in which the input variables are numerous and do not act simultaneously. The architecture is similar to that of a multilayer feed-forward neural network, except that some input nodes are moved to hidden layers: the input variables do not all enter at a single layer, but at different layers according to the order in which they take effect, which reduces the size of the network. In addition, the activation function of the hidden nodes is a one-dimensional wavelet function, which avoids the curse of dimensionality brought about by multivariate wavelet functions. The proposed network is therefore a powerful tool for high-dimensional problems, and is especially suitable for modeling large-scale industrial processes comprising several work procedures. The methodology is tested by modeling the product quality of a hot tandem rolling mill, fitted and validated with measured engineering data. Experimental results show that the proposed wavelet neural network architecture is feasible and has good application prospects.
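The staged-input idea described in the abstract can be sketched in a few lines of Python. Everything below is an illustrative assumption, not taken from the paper: the class name, the Ricker (Mexican-hat) mother wavelet, and the layer sizes are all hypothetical; the paper's abstract only specifies that inputs enter at different layers in processing order and that hidden activations are one-dimensional wavelets.

```python
import numpy as np

def ricker(x):
    # Mexican-hat wavelet, a common 1-D choice (an assumption here; the
    # abstract does not name a specific mother wavelet)
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

class StagedWaveletNet:
    """Sketch of a feed-forward net in which each work procedure's
    variables enter at their own layer, with 1-D wavelet activations."""

    def __init__(self, stage_input_dims, hidden_sizes, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W, self.b = [], []
        prev = 0
        for d_in, h in zip(stage_input_dims, hidden_sizes):
            # each layer sees the previous hidden outputs plus the raw
            # inputs belonging to this processing stage
            fan_in = prev + d_in
            self.W.append(rng.normal(scale=1.0 / np.sqrt(fan_in),
                                     size=(h, fan_in)))
            self.b.append(np.zeros(h))
            prev = h
        self.w_out = rng.normal(scale=1.0 / np.sqrt(prev), size=prev)

    def forward(self, stage_inputs):
        h = np.empty(0)
        for x, W, b in zip(stage_inputs, self.W, self.b):
            z = W @ np.concatenate([h, x]) + b
            h = ricker(z)  # 1-D wavelet applied element-wise
        return float(self.w_out @ h)

# three hypothetical work procedures contributing 3, 2 and 2 variables
net = StagedWaveletNet(stage_input_dims=[3, 2, 2], hidden_sizes=[5, 4, 3])
y = net.forward([np.ones(3), np.ones(2), np.ones(2)])
```

Because each stage's variables enter only where they become relevant, the first-layer weight matrix no longer has to span all inputs at once, which is the network-size reduction the abstract claims.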
Source: Systems Engineering and Electronics (《系统工程与电子技术》; indexed in EI, CSCD, Peking University Core), 2004, No. 7, pp. 941-944 (4 pages)
Funding: supported by the National Natural Science Foundation of China (60274055) and the Natural Science Foundation of Xi'an Jiaotong University (0900-573024)
Keywords: wavelet neural network; high-dimensional input; multi-input layer; work procedure; large-scale industrial process; quality model; hot rolling mill
  • Related literature

References (6)

Secondary references (19)

  • 1 Xing Jinsheng. Modeling the product quality of a large multi-roll hot tandem rolling mill based on KDD techniques[M]. Xi'an: Xi'an Jiaotong University, 2000.
  • 2 Xing Jinsheng. [D]. Xi'an: Xi'an Jiaotong University, 2000.
  • 3 Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators[J]. Neural Networks, 1989, 2(5): 359-366.
  • 4 Williamson R C, Helmke U. Existence and uniqueness results for neural network approximation[J]. IEEE Trans on Neural Networks, 1995, 6(1): 2-13.
  • 5 Chen T, Chen H. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems[J]. IEEE Trans on Neural Networks, 1995, 6(4): 911-917.
  • 6 Anderssen R S, Bloomfield P. Properties of the random search in global optimization[J]. Journal of Optimization Theory and Applications, 1975, 16: 383-398.
  • 7 Janson D J, Frenzel J F. Training product unit neural networks with genetic algorithms[J]. IEEE Expert, 1993, 8(5): 26-33.
  • 8 Kirkpatrick S, Gelatt C D, Vecchi M P. Optimization by simulated annealing[J]. Science, 1983, 220(4598): 671-680.
  • 9 Baba N, Mogami Y, Kohzaki M, Shiraishi Y, Yoshida Y. A hybrid algorithm for finding the global minimum of error function of neural networks and its applications[J]. Neural Networks, 1994, 7(8): 1253-1265.
  • 10 Nelder J A, Mead R. A simplex method for function minimization[J]. The Computer Journal, 1965, 7: 308-313.

Co-citation literature: 77

Co-cited literature: 5

Citing literature: 1
