
Research on Multi-scale Motion Attention Fusion Algorithm for Video Target Detection (Cited by: 2)
Abstract: Moving target detection is one of the key technologies in video analysis. To address the limitations of current target detection algorithms in global motion scenes, this paper proposes a target detection algorithm based on multi-scale motion attention fusion, which offers a new approach to the target detection problem. The algorithm removes noise from the motion vector field by spatio-temporal filtering and defines a motion attention model according to the mechanism by which motion attention is formed. To improve the accuracy of the attention computation, a measure formula for target pixel blocks is defined, and D-S evidence theory is used to perform decision-level fusion of the motion attention across multiple spatial scales, finally yielding the location of the moving target region. Test results on several different high-definition video sequences show that the proposed algorithm can accurately detect and locate targets in global motion scenes, effectively overcoming the limitations of existing algorithms.
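The pipeline summarized in the abstract (filter the motion vector field, compute block-level motion attention at several spatial scales, weight each scale by the trust degree of its motion vectors, then fuse the scales with D-S evidence theory) can be illustrated with a small sketch of the fusion step alone. The snippet below is a hypothetical illustration, not the authors' implementation: the mapping from attention values to belief masses, the per-scale reliability weights, and the 0.5 decision threshold are assumptions made only for this example.

```python
# Minimal sketch (assumed, not the paper's code) of D-S decision fusion of
# multi-scale motion-attention maps over the frame of discernment
# {target, background}, with an explicit "uncertain" mass.

import numpy as np

def attention_to_masses(att, reliability):
    """Turn a normalized attention value in [0, 1] into basic belief masses
    m(target), m(background), m(uncertain), discounted by the reliability
    (trust degree) of the motion vectors at that scale."""
    m_t = reliability * att
    m_b = reliability * (1.0 - att)
    m_u = 1.0 - reliability          # remaining mass assigned to ignorance
    return m_t, m_b, m_u

def ds_combine(m1, m2):
    """Dempster's rule for two mass assignments over {T, B, uncertain}."""
    t1, b1, u1 = m1
    t2, b2, u2 = m2
    conflict = t1 * b2 + b1 * t2      # mass on contradictory hypothesis pairs
    k = 1.0 - conflict
    if k <= 1e-9:                     # total conflict: fall back to ignorance
        return 0.0, 0.0, 1.0
    t = (t1 * t2 + t1 * u2 + u1 * t2) / k
    b = (b1 * b2 + b1 * u2 + u1 * b2) / k
    u = (u1 * u2) / k
    return t, b, u

def fuse_multiscale_attention(attention_maps, reliabilities, thresh=0.5):
    """Fuse per-block attention maps from several scales (all resampled to the
    same block grid) and return a binary mask of the moving-target region."""
    h, w = attention_maps[0].shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            fused = attention_to_masses(attention_maps[0][i, j], reliabilities[0])
            for att, rel in zip(attention_maps[1:], reliabilities[1:]):
                fused = ds_combine(fused, attention_to_masses(att[i, j], rel))
            mask[i, j] = fused[0] > thresh    # belief in 'target' wins
    return mask

# Example: three scales of a 4x4 block grid with random attention values.
rng = np.random.default_rng(0)
maps = [rng.random((4, 4)) for _ in range(3)]
print(fuse_multiscale_attention(maps, reliabilities=[0.9, 0.8, 0.7]))
```

In this sketch, conflict between scales (one scale voting "target" while another votes "background") is renormalized by Dempster's rule rather than simply averaged, which is the usual motivation for decision-level D-S fusion of unequally reliable sources.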
Source: Journal of Electronics & Information Technology (EI, CSCD, Peking University Core Journal), 2014, No. 5: 1133-1138 (6 pages)
Funding: National Natural Science Foundation of China (61001140); Industrialization Cultivation Project of the Shaanxi Provincial Department of Education (2012JC19); Major Project of the Xi'an Technology Transfer Promotion Program (CX12166)
Keywords: Target detection; Motion attention; Fusion; Global motion scene

References (9)

  • 1 Stauffer C and Grimson W E L. Adaptive background mixture models for real-time tracking[C]. Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, USA, 1999, 2: 246-252.
  • 2 Qi Bin, Ghazal Mohammed, and Amer Aishy. Robust global motion estimation oriented to video object segmentation[J]. IEEE Transactions on Image Processing, 2008, 17(6): 958-967.
  • 3 Chen Yue-meng. A joint approach to global motion estimation and motion segmentation from a coarsely sampled motion vector field[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2011, 21(9): 1316-1328.
  • 4 Itti L and Koch C. Computational modeling of visual attention[J]. Nature Reviews Neuroscience, 2001, 2(3): 193-203.
  • 5 Fang Yu-ming, Lin Wei-si, Lau Chiew Tong, et al. A visual attention model combining top-down and bottom-up mechanisms for salient object detection[C]. Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech Republic, 2011: 1293-1296.
  • 6 Ozeki Motoyuki, Kashiwagi Yasuhiro, et al. Top-down visual attention control based on a particle filter for human-interactive robots[C]. Proceedings International Conference on Human System Interactions, Yokohama, Japan, 2011: 188-194.
  • 7 Ma Yu-Fei, Hua Xian-Sheng, and Lu Lie. A generic framework of user attention model and its application in video summarization[J]. IEEE Transactions on Multimedia, 2005, 7(5): 907-919.
  • 8 Han Jun-wei. Object segmentation from consumer video: a unified framework based on visual attention[J]. IEEE Transactions on Consumer Electronics, 2009, 55(3): 1597-1605.
  • 9 Verri A and Poggio T. Motion field and optical flow: qualitative properties[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 11(5): 490-498.

Co-cited References (21)

  • 1 Dai Ke-xue, Li Guo-hui. A codebook-based moving object detection algorithm for surveillance video[J]. Computer Engineering, 2007, 33(14): 27-29. (Cited by: 8)
  • 2 Joshi K A, Thakore D G. A survey on moving object detection and tracking in video surveillance system[J]. International Journal of Soft Computing and Engineering, 2012, 2(3): 44-48.
  • 3 Moscheni F, Bhattacharjee S, Kunt M. Spatio-temporal segmentation based on region merging[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(9): 897-915.
  • 4 Brox T, Bruhn A, Papenberg N, et al. High accuracy optical flow estimation based on a theory for warping[A]. Proc. of 8th European Conference on Computer Vision[C]. 2004: 25-36.
  • 5 Toyama K, Krumm J, Brumitt B, et al. Wallflower: principles and practice of background maintenance[A]. Proc. of the Seventh IEEE International Conference on Computer Vision[C]. 1999: 255-261.
  • 6 Stauffer C, Grimson W E L. Learning patterns of activity using real-time tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 747-757.
  • 7 Barnich O, Van Droogenbroeck M. ViBe: a powerful random technique to estimate the background in video sequences[A]. Proc. of International Conference on Acoustics, Speech, and Signal Processing[C]. 2009: 945-948.
  • 8 Barnich O, Van Droogenbroeck M. ViBe: a universal background subtraction algorithm for video sequences[J]. IEEE Transactions on Image Processing, 2011, 20(6): 1709-1724.
  • 9 Kim K, Chalidabhongse T H, Harwood D, et al. Real-time foreground-background segmentation using codebook model[J]. Real-Time Imaging, 2005, 11(3): 172-185.
  • 10 Kim K, Chalidabhongse T H, Harwood D, et al. Background modeling and subtraction by codebook construction[A]. Proc. of 2004 International Conference on Image Processing[C]. Singapore, 2004: 3061-3064.

Citing Literature (2)

Secondary Citing Literature (7)
