Abstract
To address the heavy dependence of previous foreground detection methods on scene-specific information, we propose ForegroundNet, a real-time deep learning model for foreground detection that requires no iteratively updated background model. ForegroundNet first extracts semantic features from the current image and an auxiliary image with a backbone network, where the auxiliary image is either an adjacent frame or an automatically generated background image of the video. These features are then fed into a deconvolution network with short connections, so that the final feature maps have the same size as the input images and contain semantic and motion features at multiple scales. Finally, a softmax layer performs binary classification to produce the detection result. Experiments on the CDNet dataset show that ForegroundNet achieves an F-measure of 0.94, surpassing the 0.82 of the second-best method, and runs at 123 fps, giving it good real-time performance.
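The abstract describes a two-stream design: a shared backbone encodes both the current frame and an auxiliary frame, a deconvolution decoder with short (skip) connections restores full resolution, and a softmax layer classifies each pixel as background or foreground. The sketch below illustrates that structure in PyTorch; the layer widths, depths, and fusion scheme are illustrative assumptions, not the authors' actual ForegroundNet configuration.

```python
import torch
import torch.nn as nn

class ForegroundNetSketch(nn.Module):
    """Hypothetical sketch of the architecture described in the abstract."""

    def __init__(self):
        super().__init__()
        # shared backbone: two downsampling conv stages (1/2 and 1/4 resolution)
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        # decoder: deconvolutions upsample back to input size
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.ConvTranspose2d(48, 16, 4, stride=2, padding=1)
        # 2-channel logits -> softmax over {background, foreground}
        self.head = nn.Conv2d(16, 2, 1)

    def encode(self, x):
        f1 = self.enc1(x)   # shallow features, 1/2 resolution
        f2 = self.enc2(f1)  # deep features, 1/4 resolution
        return f1, f2

    def forward(self, current, auxiliary):
        c1, c2 = self.encode(current)
        a1, a2 = self.encode(auxiliary)
        x = self.dec1(torch.cat([c2, a2], dim=1))      # fuse deep features of both streams
        x = self.dec2(torch.cat([x, c1 + a1], dim=1))  # short connection from shallow features
        return torch.softmax(self.head(x), dim=1)      # per-pixel class probabilities

net = ForegroundNetSketch()
cur = torch.randn(1, 3, 64, 64)   # current frame
aux = torch.randn(1, 3, 64, 64)   # adjacent frame or generated background
mask = net(cur, aux)              # (1, 2, 64, 64); channels sum to 1 at each pixel
```

Because the decoder mirrors the encoder's strides, the output probability map matches the input resolution, which is the property the abstract attributes to the short-connection deconvolution network.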
Authors
LAI Shao-chuan; WANG Jia-xin; MA Cui-xia (South China Branch of Sinopec Sales Co., Ltd., Guangzhou, Guangdong 510000, China; Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 101408, China)
Source
Journal of Graphics (《图学学报》), CSCD, Peking University Core Journal, 2020, Issue 3, pp. 409-416 (8 pages)
Funding
National Natural Science Foundation of China (61872346)
National Key R&D Program of China (2018YFC0809303)
Keywords
foreground detection
deep learning
computer vision
convolutional neural network
motion segmentation