
二进神经网络的汉明图学习算法 (Cited by: 5)

The Hamming-Graph Learning Algorithm of BNN
Abstract (translated from the Chinese): The geometric learning algorithm ETL for binary neural networks must separate the entire set of true vertices from the set of false vertices, and it is an exhaustive algorithm. Using the Hamming graph and sub-Hamming graph defined in this paper, we give the basis for selecting the core vertex; the direction and steps for forming and expanding a sub-Hamming graph; and the condition under which a sub-Hamming graph can be represented by one hidden neuron, together with the formulas for its weights and threshold. The proposed Hamming-graph learning algorithm for binary neural networks applies to arbitrary Boolean functions; it requires no exhaustive search and is guaranteed to converge, and is therefore fast; on the examples cited in the literature it produces a three-layer feedforward network whose structure is more optimized than that obtained with ETL.

Extended English abstract: In the expand-and-truncate learning algorithm (ETL), the entire set of true nodes must be separated from the set of false nodes, and ETL is an exhaustive learning algorithm. This paper proposes the Hamming-Graph Learning Algorithm (HGLA). The n-dimensional Hamming graph HG = <V, E> is an n-regular graph that expresses the n-dimensional Hamming geometric space: the node set V has 2^n nodes, and any two nodes joined by an edge are at Hamming distance exactly 1. If a core node V_c of the HG is selected, its adjacent nodes are taken as its children, and so on, a directed HG is obtained. The subgraph consisting of the core (root) node V_c and all of its true children is called the first-level sub-Hamming graph of the HG, denoted SHG1. The vertices inside an SHG1 of the n-dimensional HG can always be separated from the vertices outside it by a hyperplane, so an SHG1 can be expressed by one hidden neuron; and the larger it can be expanded, the more the structure of the binary neural network is optimized. However, whether an SHGL with L > 1 can be separated from the outside vertices by a hyperplane is not guaranteed, and without conditions governing how the core node is selected and how the SHG1 is expanded, the learning algorithm would still be exhaustive. Therefore, a true node that has the most adjacent true nodes should be selected as the core node V_c; if there is more than one such node, the one whose adjacent true nodes are most adjacent to one another should be selected. Only true nodes all of whose parents are already inside the SHGL may be added, and every expansion step is tested with the criterion min{H_y | V_y ∈ SHGL} > max{H_z | V_z ∈ ~SHGL} (formula 5).
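Criterion (5) compares a linear score H on the two sides of a candidate hyperplane, but the abstract does not reproduce the weight formula. A minimal sketch, assuming a hypothetical sphere-type weighting w_i = 2c_i - 1 around the core c (the kind of construction used for reference hyperspheres in ETL-style geometry, not necessarily this paper's formula): the score H(x) = w·x then equals the number of ones in c minus the Hamming distance from x to c, so a single threshold on H carves out exactly a Hamming ball around the core, i.e. a vertex set realizable by one hidden neuron.

```python
from itertools import product

def hamming(u, v):
    # Hamming distance between two 0/1 tuples.
    return sum(a != b for a, b in zip(u, v))

def H(x, core):
    # Hypothetical sphere-type score with w_i = 2*c_i - 1 (an assumption
    # for illustration, not the paper's published weight formula).
    return sum((2 * c - 1) * xi for c, xi in zip(core, x))

n, core = 4, (1, 0, 1, 1)
ones = sum(core)

# Identity: H(x) = |c|_1 - d(x, c) for every vertex of the 4-cube.
for x in product((0, 1), repeat=n):
    assert H(x, core) == ones - hamming(x, core)

# Hence thresholding H at |c|_1 - k - 0.5 selects the Hamming ball of
# radius k around the core, and criterion (5) holds for that ball:
k = 1
ball = [x for x in product((0, 1), repeat=n) if hamming(x, core) <= k]
rest = [x for x in product((0, 1), repeat=n) if hamming(x, core) > k]
assert min(H(y, core) for y in ball) > max(H(z, core) for z in rest)
```

Under this weighting the gap in criterion (5) is exactly one unit of Hamming distance, which is why a ball around the core is always separable; the paper's contribution is extending the separable region beyond a plain ball via the SHG expansion rules.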
If the SHGL is formed and expanded as far as possible according to these conditions, and it is linearly separable, then it is the largest set of true nodes in the SHGL that can be linearly separated from the others. Because it is not exhaustive, HGLA is a fast learning algorithm. Two examples from the references were tested and compared under HGLA and ETL. For example A (a 7-bit Boolean function), HGLA requires only 3 hidden neurons while ETL requires 7; for example B (approximation of a circular region), HGLA needs only 4 hidden neurons while ETL requires 5. Moreover, the output neuron produced by HGLA needs no training, so the resulting network is more optimized.
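The selection and expansion steps described above can be sketched as follows. This is a simplified reconstruction, not the paper's algorithm: the core is the true vertex with the most true neighbors, the SHG grows only through true neighbors, and each expansion step is accepted only if the enlarged SHG is still linearly separable from the rest of the cube (tested here with a plain perceptron standing in for criterion (5), whose exact weight formula the abstract does not give).

```python
from itertools import product

def neighbors(v):
    # All vertices at Hamming distance 1 from v (a tuple of 0/1 bits).
    return [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(len(v))]

def select_core(true_set):
    # Core rule from the abstract: the true vertex with the most true
    # neighbors (ties broken arbitrarily here, not by the paper's rule).
    return max(true_set,
               key=lambda v: sum(u in true_set for u in neighbors(v)))

def linearly_separable(inside, outside, epochs=1000):
    # Perceptron stand-in for criterion (5): look for weights w and a
    # threshold t with w.x > t on `inside` and w.x < t on `outside`.
    # The perceptron converges exactly when the sets are separable; the
    # epoch cap is a practical cutoff, not part of the original method.
    n = len(next(iter(inside)))
    w, t = [0.0] * n, 0.0
    samples = [(v, 1) for v in inside] + [(v, -1) for v in outside]
    for _ in range(epochs):
        mistakes = 0
        for x, label in samples:
            if label * (sum(wi * xi for wi, xi in zip(w, x)) - t) <= 0:
                w = [wi + label * xi for wi, xi in zip(w, x)]
                t -= label
                mistakes += 1
        if mistakes == 0:
            return True
    return False

def grow_shg(true_set, n):
    # Greedy SHG expansion: start at the core and admit a true neighbor
    # only while one hyperplane (one hidden neuron) still cuts the SHG off.
    all_v = set(product((0, 1), repeat=n))
    core = select_core(true_set)
    shg = {core}
    frontier = [u for u in neighbors(core) if u in true_set]
    while frontier:
        cand = frontier.pop(0)
        if cand in shg:
            continue
        if linearly_separable(shg | {cand}, all_v - (shg | {cand})):
            shg.add(cand)
            frontier += [u for u in neighbors(cand) if u in true_set]
    return core, shg

# Toy function on the 3-cube: the true vertices form the face x0 = 1,
# which a single hidden neuron can realize.
true_set = {(1, 0, 0), (1, 1, 0), (1, 0, 1), (1, 1, 1)}
core, shg = grow_shg(true_set, 3)
print(shg == true_set)  # → True
```

In the paper, each accepted SHG becomes one hidden neuron whose weights and threshold follow the stated formulas, and the procedure repeats on the remaining true vertices until all are covered, yielding the three-layer feedforward network.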
Authors: 华强 (HUA Qiang), 郑启伦 (ZHENG Qi-Lun)
Source: Chinese Journal of Computers (《计算机学报》), indexed in EI, CSCD, and the Peking University Core list; 2001, No. 11, pp. 1150-1155 (6 pages)
Funding: supported by the National Natural Science Foundation of China (grant 69783008) and the Natural Science Foundation of Guangdong Province (grant 970525)
Keywords: pattern recognition, binary neural networks, learning algorithm, Hamming graph
Related literature: references: 1; secondary references: 8; co-citations: 4; co-cited references: 56

  • 1 陆阳, 韩江洪, 张维勇. 二进神经网络中笛卡尔球的研究 (Research on Cartesian spheres in binary neural networks) [J]. 模式识别与人工智能 (Pattern Recognition and Artificial Intelligence), 2004, 17(3): 368-373. Cited by: 3
  • 2 魏允, 王守觉, 王丽艳, 鲁华祥. 一种通用神经网络处理机设计及其VLSI集成化讨论 (Design of a general-purpose neural network processor and a discussion of its VLSI integration) [J]. 电子学报 (Acta Electronica Sinica), 1995, 23(5): 7-11. Cited by: 3
  • 3 R Couhrip. A CMOS binary pattern classifier based on Parzen's method. IEEE Trans., 1998, 9(1): 2-10.
  • 4 A P Almeida, J E Franca. Digitally programmable analog building blocks for the implementation of artificial neural networks. IEEE Trans., 1996, 7(2): 506-514.
  • 5 Chung-Yu Wu, Jeng-Feng Lan. CMOS current-mode neural associative memory design with on-chip learning. IEEE Trans., 1996, 7(1): 167-181.
  • 6 J H Kim, Sung-Kwon Park. The geometrical learning of binary neural networks. IEEE Trans., 1995, 6(1): 237-247.
  • 7 Huynh T Q, Reggia J A. Guiding hidden layer representations for improved rule extraction from neural networks. IEEE Transactions on Neural Networks, 2011, 22(2): 264-275.
  • 8 Diederich J, Tickle A B, Geva S. Quo Vadis? Reliable and practical rule extraction from neural networks. In: Advances in Machine Learning I, Studies in Computational Intelligence, vol. 262. Berlin, Heidelberg: Springer, 2010. 479-490.
  • 9 Setiono R. Extracting M-of-N rules from trained neural networks. IEEE Transactions on Neural Networks, 2000, 11(2): 512-519.
  • 10 Bader S, Holldobler S, Mayer-Eichberger B. Extracting propositional rules from feed-forward neural networks: a new decompositional approach. In: Proceedings of the 3rd International Workshop on Neural-Symbolic Learning and Reasoning. Hyderabad, India: IJCAI, 2007. 1-6.

Citing articles: 5

Secondary citing articles: 8
