
Exploring diversity regularization in neural networks (cited by: 2)
Abstract: Traditional neural network training computes the error between the network output Y and the target T and propagates it backward to update node weights, repeating this process until the desired result is reached. Such methods suffer from slow convergence and a tendency to overfit. Diversity regularization has recently been shown to simplify models and improve generalization. This paper explores neural network training with a diversity regularization term: weight diversity is incorporated into the objective function, so that not only the output error but also the redundancy among node weights is considered, reducing repeated structure inside the network. Combination and comparison experiments against two traditional training methods, the back-propagation algorithm (BP) and difference target propagation (DTP), show that training with the diversity regularization term achieves faster convergence and a lower error rate.
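The abstract does not give the exact form of the paper's diversity term, but a common formulation penalizes pairwise similarity between the weight vectors of hidden units in a layer. The sketch below (an illustrative assumption, not the paper's implementation; the function names and the penalty weight `lam` are hypothetical) adds a mean squared pairwise cosine-similarity penalty to a squared-error loss:

```python
import numpy as np

def diversity_penalty(W):
    """Mean squared pairwise cosine similarity between the rows of W
    (one row per hidden unit); larger values mean more redundancy."""
    # Normalize each hidden unit's incoming weight vector.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    U = W / np.maximum(norms, 1e-12)
    G = U @ U.T                       # Gram matrix of cosine similarities
    n = W.shape[0]
    off_diag = G - np.eye(n)          # drop each unit's similarity to itself
    return np.sum(off_diag ** 2) / (n * (n - 1))

def regularized_loss(output, target, W, lam=0.1):
    """Squared-error task loss plus a diversity term on the layer weights."""
    mse = np.mean((output - target) ** 2)
    return mse + lam * diversity_penalty(W)
```

Identical weight rows drive the penalty to 1, while mutually orthogonal rows drive it to 0, so gradient descent on `regularized_loss` pushes hidden units toward representing different features.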
Authors: Qu Weiyang, Yu Yang (National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210046, China)
Source: Journal of Nanjing University (Natural Science), indexed in CAS, CSCD, and the Peking University Core list, 2017, No. 2, pp. 340-349 (10 pages)
Funding: National Natural Science Foundation of China (61375061); Natural Science Foundation of Jiangsu Province (BK20160066)
Keywords: diversity regularization, feedforward neural network, back-propagation, difference target propagation