Abstract: Lipreading is a technique that translates a silent video of a single speaker's lip movements into text. Because lip movements are small in amplitude, existing lipreading methods suffer from weak feature discriminability and poor generalization. To address this problem, this paper studies how to refine lipreading visual features along the temporal, spatial, and channel dimensions, and proposes a lipreading method based on multiple visual attention (Lipreading Based on Multiple Visual Attention Network, LipMVA). First, channel attention adaptively recalibrates channel-level features to mitigate interference from uninformative channels. Then, two spatiotemporal attention modules of different granularities suppress the influence of unimportant pixels or frames. Experiments on the CMLR and GRID datasets show that LipMVA reduces the recognition error rate, verifying the effectiveness of the method.
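The channel-recalibration step described above can be illustrated with a minimal squeeze-and-excitation-style sketch. This is not the authors' LipMVA implementation; it is a generic numpy illustration, and the weight matrices `w1`/`w2` and the reduction ratio are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Recalibrate channel-level features of a video feature map.

    feat: array of shape (C, T, H, W) - channels, frames, height, width.
    w1:   array of shape (C // r, C)  - bottleneck projection (r = reduction ratio).
    w2:   array of shape (C, C // r)  - expansion back to C channels.
    Returns features of the same shape, with each channel scaled by a
    learned weight in (0, 1), so uninformative channels are suppressed.
    """
    c = feat.shape[0]
    # Squeeze: global average pooling over time and space -> one value per channel
    z = feat.mean(axis=(1, 2, 3))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gating
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Recalibrate: broadcast the per-channel gate over all frames and pixels
    return feat * s.reshape(c, 1, 1, 1)
```

Because the gate values lie strictly in (0, 1), the module can only attenuate channels, never amplify them; in training, the projections learn to keep the gates near 1 for informative channels.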
Funding: This work was supported by the President Foundation of the Institute of Acoustics, Chinese Academy of Sciences (No. 98-02) and the "863" High Tech R&D Project of China (No. 863-306-ZD-11-1).
Abstract: The perception of human language is inherently a multi-modal process, in which visual information can compensate for audio information to improve recognition performance. This phenomenon has been studied for English, German, Spanish, and other languages, but it has not yet been reported for Chinese. In our experiment, 14 syllables (/ba, bi, bian, biao, bin, de, di, dian, duo, dong, gai, gan, gen, gu/), extracted from the Chinese audiovisual bimodal speech database CAVSR-1.0, were pronounced by 10 subjects. The resulting audio-only, audiovisual, and visual-only stimuli were recognized by 20 observers. The audio-only and audiovisual stimuli were each presented under 5 conditions: no noise, and SNR 0 dB, -8 dB, -12 dB, and -16 dB. From the experimental results, the following conclusions are reached for Chinese speech: human beings can recognize visual-only stimuli rather well; the place of articulation determines visual distinctiveness; and in noisy environments, visual information remarkably compensates for audio information, greatly improving recognition performance.