Abstract
Convolution operations in a convolutional neural network shrink the spatial size of the image; after several convolutions the feature map becomes too small to support further training of the network model. Padding addresses this by filling the border of the image with zeros before each convolution, so that the output feature map has the same size as the input image. This removes the limit on training depth and gives the network better performance. However, filling the border with zeros blurs the edge information of the image. This paper proposes a parameterized padding operation that replaces the zero fill with weighted values, preserving the training depth while retaining the image's edge information. Trained on the Fashion-MNIST dataset with a simple convolutional neural network of three convolutional layers and two fully connected layers, the method improves accuracy by 1.52%.
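The size-preservation idea above can be sketched in plain NumPy. The `pad_weighted` function below is only an illustrative assumption of what a "weighted" border fill might look like (edge replication scaled by a single weight `w`); it is not the paper's exact parameterized formulation, which the abstract does not specify:

```python
import numpy as np

def pad_zero(img, p=1):
    """Standard zero padding: surround the image with p rows/cols of 0,
    so a k x k convolution with p = (k - 1) // 2 keeps the input size."""
    return np.pad(img, p, mode="constant", constant_values=0)

def pad_weighted(img, p=1, w=0.5):
    """Hypothetical parameterized padding (illustration only): fill the
    border with the nearest edge value scaled by a weight w, so the
    padding carries edge information instead of zeros."""
    out = np.pad(img, p, mode="edge").astype(float)  # replicate edge pixels
    border = np.ones_like(out, dtype=bool)
    border[p:-p, p:-p] = False                       # mark only the new border
    out[border] *= w
    return out

img = np.arange(9.0).reshape(3, 3)
z = pad_zero(img)        # 5x5, border is all zeros
wp = pad_weighted(img)   # 5x5, border is 0.5 * nearest edge pixel
```

With `p = (k - 1) // 2` for an odd kernel size `k`, either padding keeps the output feature map the same size as the input, which is the property the abstract relies on to preserve training depth.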
Authors
刘之瑜
徐精诚
罗长银
王豪石
张淑芬
LIU Zhi-yu; XU Jing-cheng; LUO Chang-yin; WANG Hao-shi; ZHANG Shu-fen (School of Science, North China University of Technology, Tangshan 063210, China; Hebei Key Laboratory of Data Science and Application, Tangshan 063210, China; Tangshan Key Laboratory of Data Science, Tangshan 063210, China)
Source
New Generation of Information Technology (《新一代信息技术》)
2020, Issue 21, pp. 7-13 (7 pages)
Funding
Tangshan Key Research and Development Program project (Grant No. 18120203A).