
Speech Signal Separation Based on Generative Adversarial Networks

Cited by: 6
Abstract: Single-channel speech separation based on deep learning requires computing a time-frequency mask, but in existing separation methods the mask is not learnable and is not encapsulated in the deep learning model for optimization; subsequent processing therefore typically relies on Wiener filtering. To address this, a speech signal separation method based on Generative Adversarial Networks (GAN) is proposed. In the speech generation stage, a recursive derivation algorithm and a sparse encoder are introduced to improve the generated time-frequency mask, and the generated speech is then fed into the discriminator for classification so as to reduce interference between signal sources. Experimental results show that, compared with a speech signal separation method based on deep neural networks, the proposed method improves the SDR and SIR separation metrics by 6.2 dB and 5.0 dB, respectively.
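The abstract turns on two ideas: a per-bin time-frequency mask that assigns each spectrogram bin to a source, and the SDR/SIR figures used to score the separation. The sketch below (pure Python, illustrative function names, not the paper's code) shows the oracle "ideal ratio mask" that mask-learning methods approximate, and a projection-based SDR in dB; the paper learns its mask inside a GAN rather than computing it from known sources.

```python
import math

def ideal_ratio_mask(speech_mag, interference_mag):
    """Oracle per-bin mask: the speech share of each T-F bin's magnitude.

    Learned-mask methods (including the GAN here) try to predict this
    without access to the clean sources.
    """
    return [s / (s + n) if (s + n) > 0 else 0.0
            for s, n in zip(speech_mag, interference_mag)]

def sdr_db(reference, estimate):
    """Signal-to-Distortion Ratio in dB via projection onto the reference."""
    dot = sum(r * e for r, e in zip(reference, estimate))
    ref_energy = sum(r * r for r in reference)
    alpha = dot / ref_energy                 # scale of the target component
    target = [alpha * r for r in reference]  # part of estimate explained by reference
    error = [e - t for e, t in zip(estimate, target)]
    return 10.0 * math.log10(sum(t * t for t in target) /
                             sum(x * x for x in error))

# A mask value of 0.8 means 80% of that bin's magnitude is attributed to speech.
mask = ideal_ratio_mask([0.8, 0.2], [0.2, 0.8])
```

An estimate that matches the reference except for small residual noise scores a high SDR; a 6.2 dB gain, as reported in the abstract, means roughly a four-fold reduction in distortion energy relative to the baseline.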
Authors: LIU Hang; LI Yang; YUAN Haoqi; WANG Junying (School of Electromechanical Engineering, Guangdong University of Technology, Guangzhou 510006, China)
Source: Computer Engineering (《计算机工程》), indexed in CAS, CSCD, and the Peking University Core Journal list, 2020, No. 1, pp. 302-308 (7 pages)
Funding: Guangdong Province Science and Technology Plan projects (2013B011304008, 2013B090600031); Foshan Industry-University-Research Special Fund project (2012HC100195)
Keywords: single-channel speech separation; Generative Adversarial Networks (GAN); time-frequency masking; recursive derivation; sparse encoder


相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部