Abstract
Benefiting from large amounts of ground truths of to-be-segmented scenes, deep-learning-based supervised foreground segmentation algorithms generally outperform conventional unsupervised methods. However, pixel-wise annotation is a tedious and time-consuming task, especially when it comes to annotating foreground moving objects, which seriously limits the deployment of supervised algorithms in the wide range of scenes without ground truths. To remove the dependence on supervised information from unseen, to-be-segmented scenes, we design an inter-frame high-level feature differencing algorithm: a cross-scene deep learning architecture that integrates the traditional frame differencing method. The proposed algorithm centers on transferring cross-scene common knowledge, such as temporal changes, so as to achieve high-accuracy segmentation without any supervised information from the target scene. Evaluated on five challenging scenes with different patterns, our algorithm achieves an average F-measure of 0.8719, surpassing both the current highest-performing supervised algorithm, FgSegNet_v2, under the same cross-scene condition, and the best unsupervised algorithm, SemanticBS. Our method processes QVGA (320×240) video at 35 frames per second, showing favorable real-time performance.
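The paper's architecture fuses high-level deep features with the traditional frame differencing method. As background on that classical baseline (this is a minimal pixel-level sketch, not the paper's high-level-feature variant; the function name and threshold value are illustrative), frame differencing marks as foreground any pixel whose intensity changes sharply between consecutive frames:

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Classical frame differencing: pixels whose absolute intensity
    change between two consecutive frames exceeds a threshold are
    labeled foreground (1); all others are background (0)."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example on a QVGA-sized frame: a simulated moving object
# brightens a 20x30 region between two frames.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:120, 150:180] = 200  # simulated moving object
mask = frame_difference_mask(prev, curr)
```

The paper's key departure from this baseline is that the differencing is applied to high-level features learned across scenes rather than to raw pixel intensities, which is what makes the temporal-change knowledge transferable to unseen scenes.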
Authors
ZHANG Jin
LI Yang
REN Chuan-lun
HUANG Lian
WANG Shuai-hui
DUAN Ye-xin
PAN Zhi-song
XIE Jun
ZHANG Jin; LI Yang; REN Chuan-lun; HUANG Lian; WANG Shuai-hui; DUAN Ye-xin; PAN Zhi-song; XIE Jun (Zhenjiang Campus, Army Military Transportation University of PLA, Zhenjiang, Jiangsu 212003, China; Command and Control Engineering College, Army Engineering University of PLA, Nanjing, Jiangsu 210007, China; North China Institute of Computer Technology, Beijing 100083, China; Shanghai Military Representative Bureau, Navy Equipment Department of PLA, Shanghai 200129, China)
Source
《电子学报》
EI
CAS
CSCD
Peking University Core Journal
2021, No. 10, pp. 2032-2040 (9 pages)
Acta Electronica Sinica
Funding
National Natural Science Foundation of China (No. 61806220).
Keywords
foreground segmentation
transfer learning
frame differencing algorithm
cross-scene learning
deep learning