
Speech Enhancement Based on Nonparametric Bayesian Method
Abstract  A speech enhancement algorithm based on nonparametric Bayesian theory with a Spike-Slab prior (NBSP) is proposed. Within a sparse-representation framework, dictionary learning, sparse-coefficient estimation, and noise-variance estimation are fused into a single Bayesian posterior-estimation procedure, with the Spike-Slab prior enforcing sparsity. First, the noisy speech is decomposed into three sub-signals (clean speech, Gaussian noise, and residual noise), each modeled with its own prior distribution. A Markov chain Monte Carlo (MCMC) sampling algorithm then computes the posterior distribution of every parameter in the three models, and the clean speech is finally reconstructed within the sparse-representation framework. Because the noise variance need not be known in advance, NBSP can operate directly on the noisy speech to infer the sparsity of the underlying speech. Experiments were conducted on the NOIZEUS database with white Gaussian, train, and street noise at SNRs of 0, 5, and 10 dB, using PESQ and segmental SNR (SegSNR) as quality measures. Compared with several widely used speech-enhancement methods, NBSP achieves better performance, especially for non-stationary noise at low input SNR.
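The abstract's core idea, fusing sparse coding and noise-variance estimation into one posterior-sampling procedure under a spike-and-slab prior, can be illustrated with a minimal toy sketch. This is not the paper's model: the dictionary here is a fixed random matrix rather than a learned one, the model is plain linear regression rather than speech frames, and all hyperparameter values (`pi`, `sigma_b`, the inverse-gamma constants) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike-and-slab sparse regression y = D w + noise, where each
# coefficient w_j = z_j * b_j with z_j ~ Bernoulli(pi) (the "spike"
# selects active atoms) and b_j ~ N(0, sigma_b^2) (the "slab").
# A Gibbs sampler alternates over the coefficients and the noise
# variance, so the noise level is inferred rather than given.
n, k, pi, sigma_b = 64, 16, 0.2, 1.0
D = rng.standard_normal((n, k)) / np.sqrt(n)          # fixed toy dictionary
w_true = (rng.random(k) < pi) * rng.standard_normal(k)
y = D @ w_true + 0.05 * rng.standard_normal(n)

w = np.zeros(k)
sigma2 = 1.0                                          # noise variance, also sampled
for it in range(500):
    for j in range(k):
        r = y - D @ w + D[:, j] * w[j]                # residual excluding atom j
        prec = D[:, j] @ D[:, j] / sigma2 + 1.0 / sigma_b**2
        mu = (D[:, j] @ r / sigma2) / prec
        # log odds of z_j = 1 vs 0 (slab vs spike), marginalizing b_j
        log_odds = (np.log(pi / (1 - pi))
                    - 0.5 * np.log(sigma_b**2 * prec)
                    + 0.5 * prec * mu**2)
        p_slab = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30.0, 30.0)))
        if rng.random() < p_slab:
            w[j] = mu + rng.standard_normal() / np.sqrt(prec)
        else:
            w[j] = 0.0
    # sample the noise variance from its inverse-gamma full conditional
    resid = y - D @ w
    sigma2 = (resid @ resid / 2 + 1e-3) / rng.gamma(n / 2 + 1e-3)

print("active atoms (true):     ", int(np.count_nonzero(w_true)))
print("active atoms (posterior):", int(np.count_nonzero(w)))
```

The spike component drives most coefficients exactly to zero, which is the "enforcing sparsity" role the abstract attributes to the Spike-Slab prior; sampling `sigma2` alongside the coefficients is what lets the method run without a separate noise-variance estimator.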
Source  Journal of Xiamen University (Natural Science), indexed in CAS, CSCD, and the Peking University Core list, 2017, No. 3, pp. 423-428 (6 pages).
Keywords  sparse representation; nonparametric Bayesian estimation; Spike-Slab prior; dictionary learning (adaptive dictionary); speech enhancement

