Abstract
To improve learning speed, a novel method is proposed for initializing the weights of complex-valued neural networks. Instead of being assigned randomly, the initial weights are obtained by calculation: with a chosen class of hidden-neuron activation functions (support-set-like functions), the complex-valued weights between the input and hidden layers are computed so that the output matrix of the hidden layer is guaranteed to be of full rank, and the existence of such a full-rank matrix is proved theoretically. Using this full-rank matrix, the complex-valued weights between the hidden and output layers are then obtained by the least-squares algorithm. With these weights as initial values, the network is trained by the steepest descent algorithm. Because the initial weights are optimized, the algorithm effectively improves both the training speed and the accuracy of complex-valued neural networks. In the special case where the number of hidden neurons equals the number of training patterns, a global minimum with zero cost function is obtained. Computer simulations demonstrate the effectiveness of the algorithm.
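The initialization idea described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual construction: it assumes a random complex input-to-hidden weight matrix (which almost surely yields a full-rank hidden output matrix, rather than the paper's support-set-like activation scheme) and then solves for the hidden-to-output weights by least squares. All dimensions and the `tanh` activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8               # number of training patterns
n_in, n_hid = 3, 8  # hidden size equals N -> exact interpolation is possible

# Complex-valued training data (illustrative random values)
X = rng.standard_normal((N, n_in)) + 1j * rng.standard_normal((N, n_in))
T = rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))

# Input-to-hidden complex weights; a random complex matrix almost surely
# makes the hidden output matrix H full rank (the paper instead computes
# these weights explicitly to guarantee full rank).
W1 = rng.standard_normal((n_in, n_hid)) + 1j * rng.standard_normal((n_in, n_hid))

# Hidden-layer output matrix; numpy's tanh accepts complex arguments
H = np.tanh(X @ W1)

# Hidden-to-output weights by complex least squares: minimize ||H W2 - T||
W2, *_ = np.linalg.lstsq(H, T, rcond=None)

cost = float(np.linalg.norm(H @ W2 - T) ** 2)
print(cost)  # expected to be near machine precision when n_hid == N
```

Because `H` is square and full rank when the number of hidden neurons equals the number of patterns, the least-squares solution interpolates the targets exactly, matching the abstract's claim of a zero-cost global minimum in that special case.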
Published in
Systems Engineering and Electronics (《系统工程与电子技术》)
Indexed in: EI, CSCD, Peking University Core Journals (北大核心)
2006, No. 6, pp. 929-932 (4 pages)
Keywords
artificial intelligence
complex-valued neural networks
high training accuracy
fast learning