
METHOD FOR IMPROVING CLASSIFICATION PERFORMANCE OF NEURAL NETWORK BASED ON FUZZY INPUT AND NETWORK INVERSION
(基于模糊化输入和反转提高神经网络分类性能的方法) — Cited by: 3
Abstract: To effectively improve the classification performance of neural networks, an architecture of a fuzzy neural network that accepts fuzzy inputs is first proposed. A cost function is then defined over the fuzzy outputs and the non-fuzzy (crisp) target outputs, and a learning algorithm for adjusting the weights is derived from it. The fuzzy neural network is then inverted, and a fuzzified inversion algorithm is proposed. Finally, computer simulations on real-world pattern classification problems verify the effectiveness of the proposed approach. The experimental results show that the approach offers high learning efficiency, high classification accuracy, and strong generalization ability.
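The pipeline the abstract describes — fuzzify crisp inputs, propagate the fuzzy values through the network, score the fuzzy outputs against crisp targets, and invert the trained network — can be sketched roughly as below. This is not the paper's exact formulation; it is a minimal illustration assuming symmetric triangular fuzzy inputs represented by alpha-cut intervals, interval arithmetic through the layers, and a simple numerical-gradient inversion. All function names (`fuzzify`, `fuzzy_forward`, `invert`) and parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def fuzzify(x, spread=0.1, alphas=(0.0, 0.5, 1.0)):
    """Turn each crisp input into a symmetric triangular fuzzy number,
    stored as one [lower, upper] interval per alpha-cut."""
    x = np.asarray(x, dtype=float)
    return {a: (x - (1 - a) * spread, x + (1 - a) * spread) for a in alphas}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interval_linear(lo, hi, W, b):
    """Propagate an interval through a linear layer with interval arithmetic:
    positive weights take the lower bound from `lo`, negative from `hi`."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def fuzzy_forward(fuzzy_x, W1, b1, W2, b2):
    """Forward pass per alpha-cut; the sigmoid is monotone increasing,
    so interval endpoints map through it directly."""
    out = {}
    for a, (lo, hi) in fuzzy_x.items():
        lo1, hi1 = interval_linear(lo, hi, W1, b1)
        lo1, hi1 = sigmoid(lo1), sigmoid(hi1)
        lo2, hi2 = interval_linear(lo1, hi1, W2, b2)
        out[a] = (sigmoid(lo2), sigmoid(hi2))
    return out

def fuzzy_cost(fuzzy_y, target):
    """Squared error between both interval endpoints of the fuzzy output
    and the crisp (non-fuzzy) target, summed over alpha-cuts."""
    return 0.5 * sum(np.sum((lo - target) ** 2) + np.sum((hi - target) ** 2)
                     for lo, hi in fuzzy_y.values())

def invert(target, W1, b1, W2, b2, x0, lr=0.1, steps=300, eps=1e-5):
    """Crisp network-inversion sketch: adjust the *input* by numerical
    gradient descent so the network output approaches `target`."""
    x = np.array(x0, dtype=float)

    def err(v):
        out = sigmoid(W2 @ sigmoid(W1 @ v + b1) + b2)
        return float(np.sum((out - target) ** 2))

    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(x.size):
            xp, xm = x.copy(), x.copy()
            xp[i] += eps
            xm[i] -= eps
            g[i] = (err(xp) - err(xm)) / (2 * eps)
        x -= lr * g
    return x
```

At alpha = 1 the interval degenerates to the crisp input, so the fuzzy forward pass contains the ordinary network as a special case; the inversion step here searches input space by gradient descent, whereas the paper derives a fuzzified inversion rule.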
Authors: Wu Yan (武妍), Wang Shoujue (王守觉)
Source: Journal of Infrared and Millimeter Waves (《红外与毫米波学报》; indexed in SCIE, EI, CAS, CSCD, Peking University core), 2005, No. 1, pp. 15-18 (4 pages)
Funding: National Natural Science Foundation of China (60135010)
Keywords: fuzzy neural network, fuzzification, fuzzy input, learning algorithm, inversion, cost function, generalization ability, pattern classification, computer simulation, performance

References (6)

  • 1. Wu Yan, Wang Shoujue. A new algorithm for improving the learning performance of neural networks through feedback [J]. Journal of Computer Research and Development, 2004, 41(9): 1488-1492.
  • 2. Hayashi Y. Fuzzy neural network with fuzzy signals and weights [J]. International Journal of Intelligent Systems, 1993, 8: 527-537.
  • 3. Ishibuchi H, Nii M. Fuzzification of input vectors for improving the generalization ability of neural networks [C]. In: Proceedings of the International Joint Conference on Neural Networks, Anchorage, Alaska, May 4-9, 1998, 2: 1153-1158.
  • 4. Li Zhen-Quan, Kecman V, Ichikawa A. Fuzzified neural network based on fuzzy number operation [J]. Fuzzy Sets and Systems, 2002, 130: 291-304.
  • 5. Ishibuchi H, Nii M. Numerical analysis of the learning of fuzzified neural networks from fuzzy if-then rules [J]. Fuzzy Sets and Systems, 2001, 120: 281-307.
  • 6. Ishibuchi H, Kwon K, Tanaka H. A learning algorithm of fuzzy neural networks with triangular fuzzy weights [J]. Fuzzy Sets and Systems, 1995, 71: 277-293.




