Abstract
To further improve the convergence speed of deep neural network training, a scale-factor-regularized batch normalization (BN) algorithm is proposed, drawing on the characteristics of the BN algorithm. By applying L2 regularization to the learnable scale factor γ in the BN layer, γ is attenuated, the upper bound on the parameter gradients is reduced, and the optimization landscape becomes smoother. Based on VGG16 and AlexNet, image classification experiments comparing this algorithm with the standard BN algorithm were carried out on the CIFAR-10, CIFAR-100, and crack image datasets. The results show that the proposed algorithm not only accelerates the convergence of network training but also achieves higher accuracy under the same number of training iterations.
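The abstract's core idea is to add an L2 penalty on the BN layer's learnable scale factor γ so that γ decays during training. A minimal NumPy sketch of that mechanism is shown below; the function names and the penalty coefficient `lam` are illustrative (the paper's experiments use VGG16/AlexNet in a full training framework, not this toy code):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Standard BN forward pass: normalize each feature over the batch,
    # then rescale by gamma and shift by beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def gamma_l2_grad(gamma, lam):
    # Gradient contribution of the L2 penalty (lam / 2) * ||gamma||^2,
    # which is what drives gamma toward smaller values.
    return lam * gamma

# One SGD step on gamma with only the regularization term active:
gamma = np.ones(4)
lr, lam = 0.1, 0.5
gamma -= lr * gamma_l2_grad(gamma, lam)
# gamma shrinks from 1.0 toward zero: 1 - 0.1 * 0.5 = 0.95
```

In a deep-learning framework this is usually realized by placing the BN γ parameters in a separate optimizer parameter group with a nonzero weight-decay coefficient, so the penalty gradient λγ is added automatically at each step.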
Authors
Liu Xiangyang; Wang Qi (Hohai University, Nanjing 211100, Jiangsu, China)
Source
Computer Applications and Software (Peking University Core Journal)
2024, No. 6, pp. 243-249 (7 pages)
Funding
National Natural Science Foundation of China (61001139)
Yunnan Provincial Major Science and Technology Special Project (202002AE090010)
Keywords
Batch normalization
Scale factor
L2 regularization
Image classification