
L_(1/2) regularization methods for weight sparsification of neural networks
Abstract: On the premise of appropriate learning accuracy, the number of neurons in a neural network should be as small as possible (structural sparsification), so as to reduce the cost and improve the robustness and generalization accuracy. This paper studies the structural sparsification of feedforward neural networks by regularization methods. Apart from the traditional L1 regularization for sparsification, we mainly employ the L1/2 regularization, which has become popular in recent years. To remove the oscillation of the iteration process caused by the nonsmoothness of the L1/2 regularizer, we smooth it in a small neighborhood of the nonsmooth point to obtain a smoothing L1/2 regularizer, expecting a higher sparsification efficiency than that of the L1 regularizer. This paper surveys our recent work on L1/2 regularization for the sparsification of neural networks, covering BP feedforward neural networks, higher order neural networks, double parallel feedforward neural networks, and Takagi-Sugeno fuzzy models.
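To make the smoothing step concrete, the following is a minimal sketch of one way to smooth the L1/2 regularizer near its nonsmooth point: keep |w|^(1/2) outside a small interval [-a, a] and replace it inside by an even quartic polynomial that matches the value and the first two derivatives at |w| = a. The interval width a, the particular quartic, and the names smoothed_l12 and smoothed_l12_grad are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def smoothed_l12(w, a=0.1):
    """Smoothed |w|^(1/2): exact outside [-a, a], even quartic inside.

    Illustrative sketch, not the paper's exact polynomial. The quartic
    p(w) = c4*w**4 + c2*w**2 + c0 matches |w|^(1/2) in value and in the
    first and second derivatives at |w| = a, so the penalty is C^2 and
    its gradient stays bounded near w = 0.
    """
    w = np.asarray(w, dtype=float)
    c4 = -3.0 / (32.0 * a**3.5)
    c2 = 7.0 / (16.0 * a**1.5)
    c0 = 21.0 * np.sqrt(a) / 32.0
    inside = np.abs(w) < a
    return np.where(inside, c4 * w**4 + c2 * w**2 + c0, np.sqrt(np.abs(w)))

def smoothed_l12_grad(w, a=0.1):
    """Gradient of smoothed_l12. The raw gradient sign(w)/(2*sqrt(|w|))
    blows up at w = 0, which is what drives the oscillation of the
    iteration; the quartic branch replaces it with the bounded
    4*c4*w**3 + 2*c2*w."""
    w = np.asarray(w, dtype=float)
    c4 = -3.0 / (32.0 * a**3.5)
    c2 = 7.0 / (16.0 * a**1.5)
    inside = np.abs(w) < a
    safe = np.where(inside, a, np.abs(w))   # avoid division by zero at w = 0
    outer = np.sign(w) / (2.0 * np.sqrt(safe))
    return np.where(inside, 4.0 * c4 * w**3 + 2.0 * c2 * w, outer)

# Usage sketch: for a weight vector W and regularization rate lam,
#   loss(W) = data_loss(W) + lam * smoothed_l12(W).sum()
# with lam * smoothed_l12_grad(W) added to the gradient in each step.
```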
Authors: Wu Wei (吴微), Yang Jie (杨洁)
Source: Scientia Sinica Mathematica (《中国科学:数学》; CSCD, Peking University Core), 2015, No. 9, pp. 1487-1504 (18 pages)
Fund: Supported by the National Natural Science Foundation of China (Grant No. 11201051)
Keywords: neural network; sparsification; L1/2 regularization

