Abstract
Current tracking techniques rely on sensors that are purely acoustic or purely visual. Audio localization covers a wide area but offers poor accuracy, while visual tracking provides high positioning precision but is constrained by the camera's field of view, so satisfactory tracking is difficult to achieve in complex environments. To address this problem, a new method is proposed for tracking objects in three-dimensional space using information fused from stereo audio and stereo vision. A simple tracking system consisting of two microphones and a stereo vision rig is introduced, and the localization estimates produced by the two subsystems are fused with an improved particle swarm optimization (PSO) algorithm, TRIBES, so that the strengths of both sensor types are combined. Experiments show that, compared with conventional methods, the new technique achieves faster and more accurate tracking.
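The abstract only names the fusion step. As a rough illustration of how two 3D position estimates might be combined by particle swarm optimization, the sketch below minimizes a confidence-weighted distance cost with a plain global-best PSO. This is a minimal sketch under stated assumptions, not the paper's method: the function name fuse_estimates_pso, the fixed weights w_audio/w_visual, and the PSO coefficients are hypothetical, and the adaptive, parameter-free swarm restructuring that distinguishes TRIBES from standard PSO is omitted.

```python
import numpy as np

def fuse_estimates_pso(p_audio, p_visual, w_audio=0.3, w_visual=0.7,
                       n_particles=30, n_iters=50, seed=0):
    """Fuse a coarse audio position estimate with a more precise visual one
    by minimizing a confidence-weighted cost with a basic global-best PSO.
    (Illustrative sketch only; the paper uses the adaptive TRIBES variant.)"""
    rng = np.random.default_rng(seed)

    def cost(x):
        # Weighted squared distances to the two sensor estimates;
        # the fixed weights stand in for per-sensor confidence.
        return (w_audio * np.sum((x - p_audio) ** 2)
                + w_visual * np.sum((x - p_visual) ** 2))

    # Initialize particles in a box spanning the two estimates.
    lo = np.minimum(p_audio, p_visual) - 0.5
    hi = np.maximum(p_audio, p_visual) + 0.5
    pos = rng.uniform(lo, hi, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    w, c1, c2 = 0.7, 1.5, 1.5  # standard (assumed) PSO coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# Example: coarse audio fix vs. precise stereo-vision fix (metres).
print(fuse_estimates_pso(np.array([1.2, 0.4, 2.1]),
                         np.array([1.05, 0.32, 2.30])))
```

In the actual system the per-sensor weights would presumably be derived from each subsystem's localization uncertainty rather than fixed constants.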
Source
Transducer and Microsystem Technologies (《传感器与微系统》)
CSCD
Peking University Core Journal (北大核心)
2013, No. 6, pp. 47-49, 52 (4 pages)