
Adaptive Background Subtraction Algorithm for Complex Scenes (cited by 20)

Adaptive background subtraction approach of Gaussian mixture model
摘要 (Abstract)

Objective: Background subtraction in complex scenes is a key problem in intelligent video surveillance and a necessary step in object detection for many computer vision applications, including human detection. Its purpose is to segment moving objects from complex scenes. Performance depends mainly on the background modeling algorithm, yet real backgrounds usually contain distracting motion, which makes the task difficult. To address the fixed number of Gaussian distributions and the coarse parameter initialization of the Gaussian mixture model, an adaptive background subtraction algorithm based on the Gaussian mixture model for complex scenes (AMGBS) is proposed.

Method: The method builds on the Gaussian mixture model, in which each pixel is modeled by a mixture of K Gaussian distributions and an online learning technique updates the background model. In the conventional approach, online K-means initializes the model parameters and the number of Gaussian distributions cannot change during detection; however, parameter initialization strongly influences foreground detection, and a fixed number of distributions cannot accommodate a changing background. In this study, the parameters of the Gaussian mixture model are initialized by combining the online K-means and online expectation-maximization (EM) algorithms: the output of online K-means is the input of online EM. Online K-means rapidly produces parameter values close to reasonable ones, and online EM then converges them quickly and accurately to a reasonable range. In addition, a gray-value classification algorithm adjusts the number of Gaussians so that the model adapts to dynamic environments: recent gray-value statistics are collected for each pixel, the gray values are classified into categories, and the number of Gaussian distributions is updated according to the number of categories. Experiments are conducted on four video datasets. Three are standard test videos widely used in video monitoring: "Waving Trees" and "Bootstrapping" from the Wallflower dataset and a pedestrian video from the Change Detection dataset; the fourth was captured at a local downtown bus station. A quantitative analysis is performed with the general evaluation criteria of precision and recall.

Result: In the conventional Gaussian mixture model, the number of Gaussian distributions is fixed for each pixel, so the model cannot adjust to a changing background: static regions need only one or two distributions, whereas moving regions need more to maintain the model, and keeping extra distributions wastes resources while too few can cause false detections. By adjusting the number of Gaussians according to gray-value statistics and optimizing parameter initialization, the average precision and recall of the proposed method are about 10% higher than those of the conventional Gaussian mixture algorithm and about 2% higher than those of other improved Gaussian mixture algorithms.

Conclusion: Test results show that the proposed method performs better than the conventional Gaussian mixture model and adapts more effectively to complex scenes than the approaches in the reviewed literature. The background can be segmented effectively and rapidly, and the precision and recall results demonstrate the method's advantage over analogous algorithms, enabling the extraction of moving objects. The approach provides a new direction for research on background subtraction; future work will attempt to control the learning rate with effective strategies and further improve precision and recall.
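The Method paragraph above describes the baseline that AMGBS modifies: each pixel is modeled by a mixture of K Gaussian distributions that is updated online. The following is a minimal sketch of that conventional per-pixel model, not the paper's implementation; the class name, learning rate, matching threshold, and background weight ratio are illustrative assumptions.

```python
# Illustrative sketch of the conventional per-pixel Gaussian mixture background model.
import numpy as np

class PixelMixture:
    """Mixture of K one-dimensional Gaussians for a single grayscale pixel."""

    def __init__(self, k=3, alpha=0.01, match_sigma=2.5, bg_ratio=0.7):
        self.alpha = alpha                       # online learning rate
        self.match_sigma = match_sigma           # match if within 2.5 standard deviations
        self.bg_ratio = bg_ratio                 # cumulative weight treated as background
        self.weights = np.full(k, 1.0 / k)
        self.means = np.linspace(0.0, 255.0, k)  # spread initial means over the gray range
        self.vars = np.full(k, 36.0)

    def update(self, x):
        """Fold gray value x into the model; return True if x looks like background."""
        d = np.abs(x - self.means)
        matched = d < self.match_sigma * np.sqrt(self.vars)
        if matched.any():
            i = int(np.argmin(np.where(matched, d, np.inf)))  # closest matching component
            self.means[i] += self.alpha * (x - self.means[i])
            self.vars[i] += self.alpha * ((x - self.means[i]) ** 2 - self.vars[i])
            self.weights = (1.0 - self.alpha) * self.weights
            self.weights[i] += self.alpha
        else:
            i = int(np.argmin(self.weights))                  # replace the weakest component
            self.means[i], self.vars[i], self.weights[i] = float(x), 900.0, self.alpha
        self.weights /= self.weights.sum()

        # Components with high weight and low variance are labelled as background.
        order = np.argsort(-self.weights / np.sqrt(self.vars))
        bg = order[np.cumsum(self.weights[order]) <= self.bg_ratio]
        if bg.size == 0:
            bg = order[:1]
        return bool(matched[bg].any())
```

In a full detector, one such model is kept per pixel, and the pixels whose update returns False form the foreground mask.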
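The initialization described in the abstract chains two online estimators: online K-means gives rough cluster centers from a pixel's early gray values, and an online EM pass refines them into weights, means, and variances. The sketch below renders that idea under stated assumptions (hypothetical learning rates and variance floor); it is not the authors' exact procedure.

```python
# Sketch of the two-stage initialization: rough centers from online K-means,
# then online EM refinement (learning rates and variance floor are assumptions).
import numpy as np

def online_kmeans(samples, k, lr=0.05):
    """Return rough cluster centers for a stream of gray values."""
    samples = np.asarray(samples, dtype=float)
    centers = np.linspace(samples.min(), samples.max(), k)
    for x in samples:
        i = int(np.argmin(np.abs(x - centers)))
        centers[i] += lr * (x - centers[i])          # move the winning center toward x
    return centers

def online_em(samples, means, var_init=100.0, lr=0.02):
    """Refine weights, means, and variances with incremental EM updates."""
    samples = np.asarray(samples, dtype=float)
    k = means.size
    weights = np.full(k, 1.0 / k)
    variances = np.full(k, var_init)
    means = means.astype(float).copy()
    for x in samples:
        # E-step: responsibility of each component for this sample.
        log_p = -0.5 * ((x - means) ** 2 / variances + np.log(variances))
        r = weights * np.exp(log_p - log_p.max())
        r /= r.sum()
        # M-step: stochastic update of weights, means, and variances.
        weights = (1.0 - lr) * weights + lr * r
        means += lr * r * (x - means)
        variances += lr * r * ((x - means) ** 2 - variances)
        variances = np.maximum(variances, 1e-2)      # keep variances positive
    return weights / weights.sum(), means, variances

# Usage: bootstrap one pixel's mixture from its first few hundred observations.
rng = np.random.default_rng(0)
history = np.concatenate([rng.normal(60, 5, 200), rng.normal(180, 8, 100)])
weights, means, variances = online_em(history, online_kmeans(history, k=2))
```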
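The gray-value classification step can be read as grouping a pixel's recent gray values into categories and letting the category count set the number of Gaussian components. The snippet below sketches one plausible grouping rule, a gap threshold on sorted values; the abstract does not publish the exact rule, so `gap` and `k_max` are assumptions.

```python
# Sketch of an assumed gray-value classification rule: sorted gray values are
# split wherever consecutive values differ by more than `gap`, and the number
# of groups caps the number of Gaussian components for that pixel.
import numpy as np

def count_gray_categories(recent_values, gap=20.0, k_max=5):
    values = np.sort(np.asarray(recent_values, dtype=float))
    categories = 1 + int(np.count_nonzero(np.diff(values) > gap))
    return min(categories, k_max)

# A stable pixel keeps one component; a pixel covering a waving tree needs more.
print(count_gray_categories([100, 102, 101, 99, 100]))            # -> 1
print(count_gray_categories([100, 101, 180, 182, 60, 58, 100]))   # -> 3
```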
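The quantitative analysis uses precision and recall, the criteria named in the abstract. A minimal pixel-wise helper, assuming binary foreground masks:

```python
# Pixel-wise precision and recall against a ground-truth foreground mask.
import numpy as np

def precision_recall(pred_mask, gt_mask):
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(int(pred.sum()), 1)   # fraction of detected foreground that is correct
    recall = tp / max(int(gt.sum()), 1)        # fraction of true foreground that is detected
    return precision, recall
```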
Source: Journal of Image and Graphics (《中国图象图形学报》, CSCD, Peking University Core Journal), 2015, No. 6: 756-763 (8 pages)
Funding: National Natural Science Foundation of China (61104095)
Keywords: background subtraction; Gaussian mixture model; online K-means; online EM; gray value

References (18)

  • 1. Apolinário L, Armesto N, Cunqueiro L. An analysis of the influence of background subtraction and quenching on jet observables in heavy-ion collisions [J]. Journal of High Energy Physics, 2013, 22: 1-33.
  • 2. Shen Y, Hu W, Liu J, et al. Efficient background subtraction for real-time tracking in embedded camera networks [C]//Proceedings of the 10th ACM Conference on Embedded Network Sensor Systems. Toronto, ON, Canada: ACM, 2012: 295-308.
  • 3. Bouwmans T. Recent advanced statistical background modeling for foreground detection: a systematic survey [J]. Recent Patents on Computer Science, 2011, 4(3): 147-176.
  • 4. Stauffer C, Grimson W E L. Adaptive background mixture models for real-time tracking [C]//Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington DC: IEEE, 1999, 22(3): 747-757.
  • 5. Shah M, Deng J, Woodford B. Illumination invariant background model using mixture of Gaussians and SURF features [C]//Computer Vision - ACCV 2012 Workshops. Berlin, Heidelberg: Springer, 2013: 308-314.
  • 6. Liu Xin, Liu Hui, Qiang Zhenping, Geng Xutao. An adaptive background model fusing the Gaussian mixture model and inter-frame difference [J]. Journal of Image and Graphics, 2008, 13(4): 729-734. (Cited: 110)
  • 7. Liu Z, Huang K, Tan T. Foreground object detection using top-down information based on EM framework [J]. IEEE Transactions on Image Processing, 2012, 21(9): 4204-4217.
  • 8. Li Y, Li L. A novel split and merge EM algorithm for Gaussian mixture model [C]//Proceedings of the 5th International Conference on Natural Computation. Tianjin: IEEE, 2009, 6: 479-483.
  • 9. Singh A, Jaikumar P, Mitra S K, et al. Detection and tracking of objects in low contrast conditions [C]//Proceedings of the IEEE National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics. Gandhinagar, India: IEEE, 2008: 98-103.
  • 10. Alpaydin E. Introduction to Machine Learning [M]. Boston: MIT Press, 2004: 278-280.


