
Adaptation method for deep neural network-based speech recognition

Cited by: 15
Abstract: To handle the speaker and environment adaptation problem in deep neural network-based speech recognition, this paper examines the inherent characteristics of the speaker and environment factors in the speech signal and proposes an adaptation scheme based on long-term features. First, a joint speaker-environment compensation model is built on Gaussian mixture models to estimate the speaker and environment parameters. These estimates are then used as long-term features and fed into the deep neural network for training together with the conventional short-term features. Experiments on the Aurora4 database show that the scheme effectively factorizes the speaker and environment factors and improves adaptation performance.
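The input scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensions are arbitrary, and the utterance-level long-term vector is a random placeholder standing in for the speaker/environment parameters that the paper estimates with a GMM-based joint compensation model.

```python
import numpy as np

def augment_features(short_term, long_term):
    """Append an utterance-level long-term vector to every frame.

    short_term: (n_frames, d_short) frame-level features (e.g. filterbanks)
    long_term:  (d_long,) utterance-level speaker/environment estimate
    Returns:    (n_frames, d_short + d_long) DNN input features
    """
    n_frames = short_term.shape[0]
    # Tile the single long-term vector so each frame carries a copy of it.
    tiled = np.tile(long_term, (n_frames, 1))          # (n_frames, d_long)
    return np.concatenate([short_term, tiled], axis=1)

# Toy example: 100 frames of 40-dim short-term features plus a
# hypothetical 10-dim long-term speaker/environment vector.
frames = np.random.randn(100, 40)
lt = np.random.randn(10)
dnn_input = augment_features(frames, lt)
print(dnn_input.shape)  # (100, 50)
```

In this setup the long-term features stay constant across an utterance, so the network can condition its frame-level predictions on the estimated speaker and environment factors.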
Authors: Deng Kan, Ou Zhijian
Source: Application Research of Computers (CSCD, Peking University Core Journal), 2016, No. 7, pp. 1966-1970 (5 pages)
Funding: National Natural Science Foundation of China (61075020, 61473168)
Keywords: speech recognition; acoustic model adaptation; deep neural networks

