Abstract
Audio-visual multimodal modeling has been shown to perform well in speech separation tasks. This paper proposes a speech separation model that improves the existing time-domain audio-visual joint speech separation algorithm and strengthens the connection between the audio and visual streams. To address the loose coupling of existing audio-visual separation models, the authors propose an end-to-end speech separation model that fuses speech features with additionally input visual features multiple times in the time domain and incorporates vertical weight sharing. The model was trained and evaluated on the GRID dataset. Experimental results show that, compared with the audio-only time-domain convolutional speech separation network (Conv-TasNet) and the audio-visual joint Conv-TasNet, the proposed network achieves performance improvements of 1.2 dB and 0.4 dB, respectively.
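To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch of (a) fusing visual features into the audio stream at multiple stages of a Conv-TasNet-style separator and (b) reusing one fusion block across all stages (vertical weight sharing). All module names, feature dimensions, and the dilated-convolution stage design are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative sketch only: repeated audio-visual fusion with a single shared
# fusion block, in the spirit of the model described in the abstract.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Projects concatenated audio+visual features back to the audio width."""
    def __init__(self, audio_dim, visual_dim):
        super().__init__()
        self.proj = nn.Conv1d(audio_dim + visual_dim, audio_dim, kernel_size=1)
        self.norm = nn.GroupNorm(1, audio_dim)

    def forward(self, audio, visual):
        # audio: (batch, audio_dim, T); visual: (batch, visual_dim, T)
        fused = torch.cat([audio, visual], dim=1)
        return self.norm(self.proj(fused))

class MultiFusionSeparator(nn.Module):
    """Stacked temporal conv stages with visual features re-fused before each.
    One FusionBlock instance is shared by all stages (weight sharing)."""
    def __init__(self, audio_dim=256, visual_dim=64, num_stages=4):
        super().__init__()
        self.fusion = FusionBlock(audio_dim, visual_dim)  # shared weights
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(audio_dim, audio_dim, kernel_size=3,
                          padding=2**i, dilation=2**i),  # length-preserving
                nn.PReLU(),
            ) for i in range(num_stages)
        ])

    def forward(self, audio, visual):
        x = audio
        for stage in self.stages:
            x = stage(self.fusion(x, visual))  # re-inject visual cues per stage
        return x

# Shape check with dummy encoder outputs (visual assumed upsampled to the
# audio frame rate beforehand).
model = MultiFusionSeparator()
audio = torch.randn(2, 256, 100)   # (batch, channels, time frames)
visual = torch.randn(2, 64, 100)
print(model(audio, visual).shape)  # torch.Size([2, 256, 100])
```

Sharing the fusion block keeps the parameter count close to the audio-only baseline while still letting every stage condition on the visual stream, which is one plausible reading of the "multiple fusion + vertical weight sharing" design.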
Authors
XU Liang, WANG Jing, YANG Wenjing, LUO Yiyu (School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China)
Source
Journal of Signal Processing (《信号处理》)
CSCD
Peking University Core Journal (北大核心)
2021, No. 10, pp. 1799-1805 (7 pages)
Funding
National Natural Science Foundation of China (62071039, 61620106002).
Keywords
speech separation
deep neural network
multi-feature fusion
audio-visual joint