Abstract
Multi-sensor information fusion is applied to the speaker tracking problem, and a joint audio-visual speaker tracking method based on a dynamic Bayesian network is proposed. Within the dynamic Bayesian network, the method obtains measurements related to the speaker's position through three perception modalities: sound source localization with a microphone array, skin-color-based face detection, and audio-visual mutual information maximization. Particle filtering is then used to fuse these measurements, and Bayesian inference yields effective speaker tracking. Information entropy theory is further applied to manage the three perception modalities dynamically, improving the overall performance of the tracking system. Experimental results verify the effectiveness of the proposed method.
A multi-sensor data fusion technique is applied to the speaker tracking problem, and a novel audio-visual speaker tracking approach based on a dynamic Bayesian network is proposed. Exploiting the complementarity and redundancy between a speaker's speech and image, three perception methods are proposed to acquire tracking information: sound source localization based on a microphone array, face detection based on skin-color information, and maximization of mutual information based on audio-visual synchronization. Within the dynamic Bayesian network framework, particle filtering fuses the tracking information, and perception management based on information entropy theory improves tracking efficiency. Experiments on real-world data demonstrate that the proposed method can robustly track the speaker even in the presence of perturbing factors such as high room reverberation and video occlusions.
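The fusion scheme described in the abstract (particle-filter fusion of several position measurements, with weight entropy usable for perception management) can be illustrated by the minimal sketch below. It is not the paper's implementation: the 1-D state, Gaussian sensor likelihoods, random-walk motion model, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurements, noise_std=0.05):
    """One predict-update-resample cycle fusing several position measurements.

    `measurements` is a list of (observed_position, sensor_std) pairs, e.g.
    one from audio localization and one from face detection (illustrative).
    """
    # Predict: random-walk motion model (an assumed, simplified dynamic).
    particles = particles + rng.normal(0.0, noise_std, size=particles.shape)
    # Update: multiply the likelihoods from each sensor (Gaussian assumption).
    for z, std in measurements:
        weights = weights * np.exp(-0.5 * ((particles - z) / std) ** 2)
    weights = weights + 1e-300          # guard against all-zero weights
    weights = weights / weights.sum()
    # Resample (systematic) to counter weight degeneracy.
    positions = (rng.random() + np.arange(len(weights))) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

def weight_entropy(weights):
    """Shannon entropy of the particle weights.

    A sensor whose update sharply lowers this entropy is informative, which
    is the kind of signal entropy-based perception management can exploit.
    """
    w = weights[weights > 0]
    return float(-np.sum(w * np.log(w)))
```

Running repeated steps with a measurement near 1.0 pulls the particle cloud toward that position; comparing `weight_entropy` before and after each sensor's update gives a simple information measure per modality.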
Source
《自动化学报》
Indexed in EI, CSCD, and the Peking University Core Journals list
2008, Issue 9, pp. 1083-1089 (7 pages)
Acta Automatica Sinica
Funding
Supported by the National Natural Science Foundation of China (60772161, 60372082)
Keywords
Speaker tracking, dynamic Bayesian network, particle filter, microphone array