Abstract
Kernel Extreme Learning Machine (KELM) generalizes the basic Extreme Learning Machine (ELM) to the kernel-method framework and achieves better stability and generalization than ELM. However, its training time O(n²m + n³ + ns) ≈ O(n³) (where n is the number of training samples, m is the feature dimension, and s is the number of output nodes) grows as the cube of the sample size, making it unsuitable for large-scale problems (taking n ≥ 20 000 as the benchmark). We therefore propose an accelerated computation framework for KELM and, within this framework, implement a fast algorithm named Nyström Kernel Extreme Learning Machine (NKELM) based on Nyström low-rank decomposition. The training time of NKELM, O(nmL + mL² + L³ + nLs) ≈ O(n) (where L is the number of hidden nodes, and L ≪ n in common cases), is only linear in n, far lower than that of KELM, and thus well suited to large-scale problems. Experiments show that NKELM attains extremely fast learning speed on large-scale datasets while producing good generalization performance.
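The abstract's core idea (approximate the n × n kernel matrix by a Nyström low-rank factor built from L landmark points, then solve the regularized system in the L-dimensional space) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the RBF kernel, the uniform landmark sampling, and all parameter values are assumptions for the example.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of A and rows of B."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def nkelm_train(X, T, L=30, reg=1e3, gamma=1.0, seed=0):
    """Sketch of Nyström-accelerated KELM training.

    K ≈ C W⁺ Cᵀ with L random landmarks, so the regularized solve runs
    in O(nL² + L³) instead of the O(n³) of exact KELM.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=L, replace=False)  # uniform landmark sampling
    Z = X[idx]
    C = rbf(X, Z, gamma)                             # n x L cross-kernel
    W = rbf(Z, Z, gamma)                             # L x L landmark kernel
    U, s, _ = np.linalg.svd(W)
    keep = s > 1e-8 * s[0]                           # truncated pseudo-inverse of W
    M = U[:, keep] / np.sqrt(s[keep])                # W^{-1/2} factor, L x r
    Phi = C @ M                                      # explicit feature map: K ≈ Phi Phiᵀ
    # Woodbury form of the KELM solution (I/reg + K)^{-1} T,
    # solved in the r-dimensional feature space:
    alpha = np.linalg.solve(Phi.T @ Phi + np.eye(M.shape[1]) / reg, Phi.T @ T)
    return Z, M, alpha

def nkelm_predict(Xnew, Z, M, alpha, gamma=1.0):
    """Predict outputs for new samples via the same Nyström feature map."""
    return rbf(Xnew, Z, gamma) @ M @ alpha
```

A small usage example on a 1-D regression task: train on (x, sin 3x) pairs with 30 landmarks, then evaluate the fit with `nkelm_predict`. The landmark count L plays the role of the hidden-node count in the abstract's O(nmL + mL² + L³ + nLs) bound.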
Source
Chinese Journal of Computers (《计算机学报》)
Indexed in: EI, CSCD, Peking University Core Journals
2014, No. 11, pp. 2235-2246 (12 pages)
Funding
National Natural Science Foundation of China (61100166, 61202184)
Key and Major Programs of the National Natural Science Foundation of China (91118005, 91218301)
Foundation for Innovative Research Groups (61221063)
Science and Technology New Star Program of Shaanxi Province (2013KJXX-29)
Natural Science Foundation of Shaanxi Province (2012JM8022)
Special Fund for Key Discipline Construction of Higher Education Institutions of Shaanxi Province (2012XKJS-A016)
New Star Team Project of Xi'an University of Posts and Telecommunications (2014XYXX-03)
Keywords
extreme learning machine
Nyström sampling
low-rank decomposition
kernel method