Abstract
To improve the performance of speech separation, visual signals can serve as auxiliary information in addition to the mixed speech signal. This multimodal modeling approach, which integrates visual and audio signals, has been shown to effectively improve speech separation performance and opens new possibilities for the task. To better capture long-term dependencies in visual and audio features and to strengthen the network's understanding of the contextual information in the input, this study proposes a time-domain audio-visual fusion speech separation model based on one-dimensional dilated convolution and the Transformer. The traditional frequency-domain audio-visual fusion approach to speech separation is moved to the time domain, avoiding the information loss and phase-reconstruction problems caused by time-frequency transformation. The proposed architecture consists of four modules: a visual feature extraction network, which extracts lip embedding features from video frames; an audio encoder, which converts the mixed speech into a feature representation; a multimodal separation network, composed of an audio subnetwork, a video subnetwork, and a Transformer network, which exploits the visual and audio features to separate the speech; and an audio decoder, which restores the separated features to clean speech. A two-speaker mixed-speech dataset generated from the LRS2 dataset is used for evaluation. Experimental results show that the proposed network reaches 14.0 dB in scale-invariant signal-to-noise ratio improvement (SI-SNRi) and 14.3 dB in signal-to-distortion ratio improvement (SDRi), clearly outperforming both the audio-only separation model and a general-purpose audio-visual fusion separation model.
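The two reported metrics compare the separated output with the clean reference before and after processing. As an illustration only (not taken from the paper), below is a minimal NumPy sketch of SI-SNR and the corresponding improvement over the unprocessed mixture; the function names si_snr and si_snr_improvement are hypothetical.

```python
import numpy as np

def si_snr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant signal-to-noise ratio (SI-SNR) in dB."""
    # Remove the mean so the metric is insensitive to DC offsets.
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference to obtain the scaled target component.
    s_target = np.dot(estimate, reference) / (np.dot(reference, reference) + eps) * reference
    e_noise = estimate - s_target
    return 10.0 * np.log10((np.sum(s_target ** 2) + eps) / (np.sum(e_noise ** 2) + eps))

def si_snr_improvement(estimate: np.ndarray, mixture: np.ndarray, reference: np.ndarray) -> float:
    """SI-SNRi: gain of the separated estimate over the raw mixture, in dB."""
    return si_snr(estimate, reference) - si_snr(mixture, reference)
```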
Authors
LIU Hongqing, XIE Qizhou, ZHAO Yu, ZHOU Yi (School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China)
Source
《信号处理》 (Journal of Signal Processing), 2024, No. 7, pp. 1208-1217 (10 pages)
Indexing: CSCD; Peking University Core Journal List (北大核心)
Funding
General Program of the Natural Science Foundation of Chongqing (CSTB2022NSCQ-MSX0990)
Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN202000612)
Keywords
speech separation
audio-visual fusion
multi-head self-attention mechanism
dilated convolution