Abstract
The geometric learning algorithm ETL for binary neural networks must separate the set of all true vertices from the set of false vertices, and it is an exhaustive algorithm. Using the Hamming graph and sub-Hamming graph defined in this paper, we give the criterion for selecting the core vertex, the direction and steps for forming and expanding a sub-Hamming graph, the condition under which a sub-Hamming graph can be represented by one hidden neuron, and the formulas for computing its weights and threshold. The proposed Hamming-graph learning algorithm for binary neural networks applies to arbitrary Boolean functions; it requires no exhaustive search and is guaranteed to converge, and is therefore fast; on the examples cited from the literature it produces three-layer feedforward networks whose structures are more optimized than those obtained by the ETL algorithm.
In the expand-and-truncate learning algorithm (ETL), every true node set must be separated from the false node sets, and ETL is an exhaustive learning algorithm. This paper proposes the Hamming-Graph Learning Algorithm (HGLA). The n-dimensional Hamming graph HG = &lt;V, E&gt; is an n-regular graph representing the n-dimensional Hamming geometric space: the node set V has 2^n nodes, and the Hamming distance between the two nodes joined by an edge is exactly 1. If a core node V_c in an HG is selected, its adjacent nodes are taken as its children, and so on, the result is a directed HG. The subgraph consisting of the core (root) node V_c and all of its true children is called the first-level sub-Hamming graph of HG, denoted SHG_1. The vertices inside an SHG_1 of the n-dimensional HG can always be separated from the vertices outside it by a hyperplane, so it can be expressed by one hidden neuron; and if it is expanded as far as possible, the structure of the binary neural network is best optimized. Whether an SHG_L with L &gt; 1 can be separated from the outside vertices by a hyperplane, however, is not guaranteed. Without conditions for selecting the core node and expanding the SHG_1, the learning algorithm for a BNN would still be exhaustive. Therefore, a true node that has the most adjacent true nodes should be selected as the core node V_c; if there is more than one such node, the one among them whose true neighbors are most adjacent to one another should be selected. Only true nodes all of whose parents are already within the SHG_L may be added, and every expansion step must be tested with the criterion min{H_y | V_y ∈ SHG_L} &gt; max{H_z | V_z ∉ SHG_L} (formula 5). If the SHG_L is formed and maximally expanded under these conditions, and if it is linearly separable, it is the largest set of true nodes that can be linearly separated from the others. Being non-exhaustive, HGLA is a fast learning algorithm.
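The core-node selection and SHG_1 construction described above can be sketched in code. This is a minimal illustration, not the paper's algorithm: the tie-breaking rule is simplified to the first criterion only, and the neuron weights below use a simple generic hyperplane construction that always separates an SHG_1 (core plus chosen true neighbors) from all other vertices; the paper's own weight and threshold formulas are not reproduced in this abstract.

```python
from itertools import product

def hamming_graph(n):
    """n-dimensional Hamming graph: all 2**n binary tuples,
    edges join nodes at Hamming distance exactly 1."""
    nodes = list(product((0, 1), repeat=n))
    adj = {v: [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(n)]
           for v in nodes}
    return nodes, adj

def select_core(true_set, adj):
    # Pick the true node with the most adjacent true nodes
    # (the paper's second tie-break rule is omitted here).
    return max(true_set, key=lambda v: sum(u in true_set for u in adj[v]))

def shg1(core, true_set, adj):
    # First-level sub-Hamming graph: the core plus its true neighbors.
    return {core} | {u for u in adj[core] if u in true_set}

def shg1_neuron(core, shg, n):
    """One separating hyperplane for an SHG_1 (illustrative construction):
    weight magnitude 1 on dimensions whose flipped neighbor is in the SHG,
    magnitude 2 otherwise; sign follows the core bit; inputs mapped to ±1.
    Members score >= sum(m) - 2, all other vertices score <= sum(m) - 4."""
    m = [1 if core[:i] + (1 - core[i],) + core[i + 1:] in shg else 2
         for i in range(n)]
    w = [(1 if core[i] else -1) * m[i] for i in range(n)]
    theta = sum(m) - 3
    return w, theta

def fires(v, w, theta):
    # Hidden neuron output: 1 iff the vertex lies on the SHG_1 side.
    return sum(wi * (2 * vi - 1) for wi, vi in zip(w, v)) >= theta
```

For instance, with n = 3 and true set {(1,1,1), (1,1,0), (1,0,1)}, the core is (1,1,1) (two true neighbors), the SHG_1 covers all three true vertices, and the resulting single neuron fires exactly on them.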
Two examples from the references are tested and compared using HGLA and ETL. For example A (a 7-bit binary function), HGLA requires only 3 hidden neurons while ETL requires 7; for example B (approximation of a circular region), HGLA requires only 4 hidden neurons while ETL requires 5; moreover, the output neuron of HGLA does not need to be trained, so the resulting network is more optimized.
Source
《计算机学报》
EI
CSCD
PKU Core Journals
2001, No. 11, pp. 1150-1155 (6 pages)
Chinese Journal of Computers
Funding
National Natural Science Foundation of China (69783008)
Natural Science Foundation of Guangdong Province (970525)
Keywords
pattern recognition
binary neural networks
learning algorithm
Hamming graph