Funding: Supported by the National Natural Science Foundation of China (61502364), the Key Scientific and Technological Project of Henan Province (132102210246), the Enterprises-Universities-Research Institutes Cooperation Project of Henan Province (142107000022), and the CERNET Innovation Project (NGII20150311).
Abstract: Motion segmentation plays an important role in many vision applications, yet it remains a challenging problem in complex scenes. Typical real-world conditions such as illumination variations, dynamic backgrounds, and camera shaking adversely affect segmentation performance. In this paper, a new method for robust motion segmentation is proposed, built from two interrelated models: a normal random model (N-model) and an enhanced random model (E-model). Both models are constructed and updated using spatio-temporal information to adapt to illumination changes and dynamic backgrounds, and they operate together in an AdaBoost-like strategy. Extensive experimental evaluations on complex scenes demonstrate that the proposed method outperforms state-of-the-art methods.
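The abstract does not specify how the N-model and E-model are built, so the sketch below shows only a generic sample-based background model in the spirit described: per-pixel random sample sets initialised from spatial neighbours, pixel classification by sample agreement, and a conservative random update at background pixels. All function names and parameter values (`n_samples`, `radius`, `min_matches`, `subsample`) are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(frame, n_samples=20):
    """Fill each pixel's sample set from random spatial neighbours of the first frame."""
    h, w = frame.shape
    model = np.empty((n_samples, h, w), dtype=frame.dtype)
    for k in range(n_samples):
        dy, dx = rng.integers(-1, 2, size=2)  # pick a random 8-neighbour offset
        model[k] = np.roll(frame, (dy, dx), axis=(0, 1))
    return model

def segment(frame, model, radius=20, min_matches=2):
    """A pixel is background if enough stored samples lie within `radius` of it."""
    diff = np.abs(model.astype(np.int16) - frame.astype(np.int16))
    matches = (diff < radius).sum(axis=0)
    return matches < min_matches  # True = foreground (moving)

def update(frame, model, fg, subsample=16):
    """Conservative update: refresh one random sample at a random subset of background pixels."""
    lucky = (rng.integers(0, subsample, size=frame.shape) == 0) & ~fg
    k = rng.integers(0, model.shape[0])
    model[k][lucky] = frame[lucky]
```

An AdaBoost-like combination, as mentioned in the abstract, would then weight the decisions of two such models (e.g. one tuned for stability, one for sensitivity) according to their recent per-pixel error rates; that weighting logic is omitted here since the paper's formulation is not given in the abstract.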