
An optimized convolutional neural network learning method combined with a genetic algorithm (cited by 11)
Abstract: The classic convolutional neural network learning method uses the steepest descent algorithm, and its learning performance depends heavily on the initial weights of the convolutional and fully connected layers. A genetic algorithm is used to generate multiple sets of initial weights, and the optimal weights are obtained through selection, crossover, and mutation operators. Using these weights as the initial weights of a convolutional neural network yields better learning performance than the randomly chosen initial weights of the steepest descent algorithm. Multiple convolutional neural network classifiers are trained with the multiple weight sets generated by the genetic algorithm and combined into an ensemble classifier, which further improves classification accuracy. Experimental results show that the proposed method achieves higher classification accuracy than the classic convolutional neural network method and common methods such as support vector machines, random forests, back-propagation neural networks, and extreme learning machines.
Source: Computer Engineering and Design (Peking University Core Journal), 2017, No. 7, pp. 1945-1950 (6 pages)
Funding: National Natural Science Foundation of China (034031122)
Keywords: convolutional neural networks; machine learning; object classification; genetic algorithm; steepest descent algorithm
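The procedure described in the abstract — evolving candidate initial-weight sets through selection, crossover, and mutation, then combining several classifiers by voting — can be sketched in plain Python. This is a minimal, hypothetical illustration, not the authors' code: a tiny linear classifier on toy data stands in for the CNN (whose fitness would in practice be validation accuracy after brief training), and all function names and parameters here are assumptions.

```python
import random

random.seed(0)

# Toy data: a linearly separable problem standing in for the image-classification task.
DATA = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(40)]
LABELS = [1 if 2 * x[0] - x[1] > 0 else -1 for x in DATA]

def accuracy(w):
    """Fitness of one candidate weight set (here: accuracy of a linear classifier)."""
    correct = sum(1 for x, y in zip(DATA, LABELS)
                  if (1 if w[0] * x[0] + w[1] * x[1] > 0 else -1) == y)
    return correct / len(DATA)

def select(pop):
    """Tournament selection: the fitter of two random candidates survives."""
    a, b = random.sample(pop, 2)
    return a if accuracy(a) >= accuracy(b) else b

def crossover(p1, p2):
    """Uniform crossover: each gene (weight) comes from either parent."""
    return [random.choice(pair) for pair in zip(p1, p2)]

def mutate(w, rate=0.2, scale=0.5):
    """Perturb each weight with small Gaussian noise at the given rate."""
    return [g + random.gauss(0, scale) if random.random() < rate else g for g in w]

def evolve(pop_size=20, generations=30):
    """Evolve a population of candidate initial-weight sets; return them best-first."""
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop))) for _ in range(pop_size)]
    return sorted(pop, key=accuracy, reverse=True)

best = evolve()

def ensemble_predict(weight_sets, x):
    """Majority vote over classifiers seeded by the top evolved weight sets.
    In the paper's setting each weight set would initialize one CNN."""
    votes = sum(1 if w[0] * x[0] + w[1] * x[1] > 0 else -1 for w in weight_sets)
    return 1 if votes > 0 else -1
```

In the paper's actual pipeline, each evolved weight set would seed one CNN trained by steepest descent, and the ensemble vote would combine the trained networks' predictions.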
