Abstract
To address the poor classification accuracy and large parameter counts of traditional deep convolutional neural networks, which make them difficult to deploy on memory-constrained devices, this paper proposes PL-Net, a lightweight convolutional neural network architecture based on multi-scale parallel fusion. First, the feature map output by the preceding layer is fed into depthwise separable convolution layers of two different scales; the parallel output feature information is then cross-fused and combined with residual learning, yielding a parallel lightweight module (PL-Module). To extract feature information more effectively, a scale-reduction convolution module (SR-Module) replaces the traditional pooling layers. Finally, the two modules are stacked to build the lightweight network. Training and testing on the CIFAR10, Caltech256, and 101_food datasets show that, compared with a traditional CNN of the same scale, MobileNet-V2, and SqueezeNet, PL-Net improves classification accuracy while reducing the number of network parameters, making it suitable for deployment on memory-constrained devices.
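The abstract's claim that depthwise separable convolutions make the network lightweight can be illustrated by comparing parameter counts against a standard convolution. This is a generic sketch of the counting formulas, not the paper's actual PL-Net configuration; the channel and kernel sizes below are hypothetical.

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise stage: one k x k kernel per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 128 -> 256 channels with 3x3 kernels (not from the paper).
standard = conv_params(128, 256, 3)                   # 294912 parameters
separable = depthwise_separable_params(128, 256, 3)   # 33920 parameters
print(standard, separable, round(standard / separable, 1))
```

For this layer the separable form needs roughly 8.7x fewer parameters, which is the kind of saving that motivates building PL-Net's modules from depthwise separable convolutions.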
Authors
FAN Rui; JIANG Pinqun; ZENG Shangyou; XIA Haiying; LIAO Zhixian; LI Peng (College of Electronic Engineering, Guangxi Normal University, Guilin, Guangxi 541004, China)
Source
Journal of Guangxi Normal University: Natural Science Edition
Indexed in CAS and the Peking University Core Journals list
2019, No. 3, pp. 50-59 (10 pages)
Funding
National Natural Science Foundation of China (11465004, 61762014)
Guilin Scientific Research and Technology Development Program (20170113-4)
Keywords
convolutional neural network
depthwise separable convolutions
residual learning
parallel convolution