Abstract
This paper defines the architecture of feedforward kernel neural networks. Motivated by practical requirements, the defined networks cover most existing feedforward neural networks. It is proved theoretically that the batch learning process of such a network actually realizes a kernel learning machine, and consequently that learning need only be carried out in the last layer, while the hidden-layer parameters can be assigned arbitrarily. This conclusion is therefore a generalization of the existing LLM and ELM. It is further observed that, when the required approximation accuracy is not too high, current feedforward neural network training techniques are unnecessarily cumbersome: training only the last layer suffices. The most advanced current applications of feedforward neural network techniques address large-sample learning and deep knowledge representation. For these two hot topics, a cheap learning strategy for large samples and a smart learning strategy for deep knowledge mining are proposed, respectively. The authors hope this paper will provoke wide discussion and even debate.
In this paper, feedforward kernel neural networks are proposed to cover a considerably large family of existing feedforward neural networks (FNNs). Motivated by practical requirements, a hidden-layer-tuning-free learning method, called the cheap learning algorithm (CLA), is presented. It is shown that CLA generalizes ELM and LLM, and that existing FNN learning algorithms may be unnecessary when the required approximation accuracy is not too high. This work also points out that the frontier applications of feedforward neural networks lie in their strong capability for deep knowledge representation and learning.
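The hidden-layer-tuning-free idea described above can be sketched in a few lines. The following is a minimal illustration of the ELM-style "cheap learning" principle (not the paper's exact algorithm): hidden-layer parameters are assigned at random and never tuned, and only the output layer is learned by solving a linear least-squares problem. All names and the toy regression task are illustrative assumptions.

```python
import numpy as np

# ELM-style "cheap learning" sketch: random, untuned hidden layer;
# only the last (output) layer is learned, via least squares.
rng = np.random.default_rng(0)

# Toy regression data (illustrative only).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# 1) Assign hidden-layer weights and biases randomly; no tuning.
n_hidden = 50
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)

# 2) Compute the hidden-layer output matrix H.
H = np.tanh(X @ W + b)

# 3) Learn only the last layer: beta minimizes ||H @ beta - y||^2.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Predictions from the fixed random hidden layer + learned output layer.
y_hat = H @ beta
mse = np.mean((y - y_hat) ** 2)
print(f"training MSE: {mse:.6f}")
```

Because step 3 is a single linear solve, training cost is dominated by one least-squares problem, which is what makes this strategy attractive for large samples when the accuracy requirement is modest.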
Source
《江南大学学报(自然科学版)》
CAS
2013, No. 6, pp. 631-636 (6 pages)
Journal of Jiangnan University (Natural Science Edition)
Keywords
feedforward neural networks
kernel learning machine
deep knowledge
deep learning