
Function integral computation based on feedforward neural networks (Cited by: 2)
Abstract: Since a three-layer feedforward neural network can approximate any continuous function, it can be used to approximate the antiderivative (primitive function) of an integrand and thereby compute the integral of a function. To compute a definite integral, or a double or triple integral over a rectangular or box-shaped domain, a three-layer feedforward neural network is first constructed and trained so that, over the integration domain, its first-order derivative, second-order mixed partial derivative, or third-order mixed partial derivative with respect to the inputs equals the corresponding integrand value; the trained network then approximates the antiderivative of the integrand. For double or triple integrals over non-rectangular or non-box domains, a change of variables can transform the integration domain into a rectangular or box-shaped one. Worked examples show that the method is theoretically simple, clear in approach, easy to implement, and achieves satisfactory accuracy.
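As a concrete illustration of the idea described in the abstract (not the authors' code), the sketch below trains a three-layer feedforward network so that its derivative with respect to the input matches the integrand on [a, b]; the trained network then acts as an approximate antiderivative, and the definite integral is read off as N(b) - N(a). Double and triple integrals over rectangular or box domains follow the same pattern with second- and third-order mixed partial derivatives. The function names, network size, and training hyperparameters are illustrative assumptions and may need tuning.

```python
# Minimal sketch, assuming a one-dimensional definite integral of f on [a, b].
# A three-layer network N(x) = sum_j v_j * sigmoid(w_j * x + b_j) is trained so that
# dN/dx matches f(x) on sample points; then integral(f, a, b) ~= N(b) - N(a).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_antiderivative(f, a, b, hidden=10, n_points=64, lr=0.1, epochs=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(a, b, n_points)          # training points in the integration interval
    w = rng.normal(scale=1.0, size=hidden)   # input-to-hidden weights
    bh = rng.normal(scale=1.0, size=hidden)  # hidden biases
    v = rng.normal(scale=0.1, size=hidden)   # hidden-to-output weights
    fx = f(x)
    for _ in range(epochs):
        z = np.outer(x, w) + bh              # (n_points, hidden) pre-activations
        s = sigmoid(z)
        s1 = s * (1.0 - s)                   # sigmoid'
        s2 = s1 * (1.0 - 2.0 * s)            # sigmoid''
        g = s1 @ (v * w)                     # dN/dx at each training point
        r = g - fx                           # residual of the derivative condition
        # gradients of the mean squared residual w.r.t. the network parameters
        grad_v = 2.0 * w * (r @ s1) / n_points
        grad_w = 2.0 * v * ((r @ s1) + w * (r @ (x[:, None] * s2))) / n_points
        grad_b = 2.0 * v * w * (r @ s2) / n_points
        v -= lr * grad_v
        w -= lr * grad_w
        bh -= lr * grad_b
    # integral ~= N(b) - N(a); any output bias cancels, so it is omitted from the model
    return float(np.sum(v * (sigmoid(w * b + bh) - sigmoid(w * a + bh))))

if __name__ == "__main__":
    # Example: integral of cos(x) on [0, pi/2]; exact value is 1.0.
    approx = train_antiderivative(np.cos, 0.0, np.pi / 2)
    print("approx:", approx, " exact:", 1.0)
```

Plain gradient descent is used only to keep the sketch self-contained; any standard training scheme that enforces the derivative condition on the integration domain serves the same purpose.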
Authors: WEI Hai, DONG Mengsi
Source: Journal of Computer Applications (《计算机应用》, CSCD, Peking University Core Journal), 2013, Issue A02, pp. 83-85 (3 pages)
Funding: National Natural Science Foundation of China (51069003); Applied Basic Research Foundation of Yunnan Province (2010ZC048)
Keywords: feedforward neural network; function integral; partial derivative; primitive function; accuracy

