Data Classification Algorithm Based on Sparse MAXOUT Convolutional Neural Network
Abstract As an artificial learning network with deep learning capability, the convolutional neural network has been widely applied in many fields owing to its small number of weights, low network-model complexity, and high algorithmic efficiency. Its performance, however, depends heavily on the choice of activation function. That choice is complicated and is mostly made by experience or experiment, so there is often no prior knowledge to draw on, or the parameter types are too cumbersome to determine quickly. The MAXOUT convolutional neural network removes the difficulty of selecting an activation function. Building on a study of the MAXOUT network architecture, and to address its non-sparse character, a ReLU sparse unit is introduced and a MAXOUT convolutional neural network based on the sparsity of the ReLU function is proposed. Data classification experiments on the MNIST and CIFAR-10 datasets show that the MAXOUT convolutional neural network with sparsity achieves better classification results.
Authors ZHAO Xinyu; HUANG Fuzhen; ZHOU Chenxu (Shanghai University of Electric Power, Shanghai 200090, China; Shanghai Municipal Engineering Design and Research General Institute (Group) Co., Ltd., Shanghai 201900, China)
Source Journal of Shanghai University of Electric Power (CAS), 2020, No. 3: 280-284 (5 pages)
Keywords convolutional neural network; activation function; sparsity
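
The abstract describes the key mechanism only in words, so the following is a minimal illustrative sketch (in PyTorch) of that idea: a Maxout convolution unit, which takes the element-wise maximum over k parallel convolutions, followed by a ReLU unit that restores exact zeros (sparsity) to the otherwise non-sparse Maxout output. The class names MaxoutConv2d and SparseMaxoutBlock, the number of pieces k = 2, and all layer sizes are assumptions made for illustration, not the authors' architecture.

# Illustrative sketch only: a Maxout convolution followed by a ReLU,
# approximating the "MAXOUT + ReLU sparsity" idea described in the abstract.
# Layer sizes and the number of Maxout pieces are assumed, not taken from the paper.
import torch
import torch.nn as nn


class MaxoutConv2d(nn.Module):
    """Maxout over k parallel convolutions: out = max_j conv_j(x)."""

    def __init__(self, in_channels, out_channels, kernel_size, k=2, padding=0):
        super().__init__()
        # One convolution produces k * out_channels feature maps;
        # Maxout takes the element-wise maximum over the k pieces.
        self.k = k
        self.out_channels = out_channels
        self.conv = nn.Conv2d(in_channels, out_channels * k, kernel_size,
                              padding=padding)

    def forward(self, x):
        y = self.conv(x)
        n, _, h, w = y.shape
        # Reshape to (N, out_channels, k, H, W) and take the max over the k pieces.
        y = y.view(n, self.out_channels, self.k, h, w)
        return y.max(dim=2).values


class SparseMaxoutBlock(nn.Module):
    """Maxout convolution with a ReLU appended to re-introduce exact zeros
    (sparsity), since a plain Maxout activation is rarely exactly zero."""

    def __init__(self, in_channels, out_channels, kernel_size, k=2, padding=0):
        super().__init__()
        self.maxout = MaxoutConv2d(in_channels, out_channels, kernel_size,
                                   k=k, padding=padding)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.maxout(x))


if __name__ == "__main__":
    # Example: a batch of MNIST-sized inputs (1 x 28 x 28).
    block = SparseMaxoutBlock(in_channels=1, out_channels=16,
                              kernel_size=3, k=2, padding=1)
    x = torch.randn(8, 1, 28, 28)
    out = block(x)
    print(out.shape)                   # torch.Size([8, 16, 28, 28])
    print((out == 0).float().mean())   # fraction of exact zeros (sparsity)

Running the demo prints the output shape and the fraction of exactly zero activations, which is the sparsity contributed by the added ReLU and which a plain Maxout unit generally lacks.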