
Near-optimal Active Learning for Tibetan Speech Recognition (临近最优主动学习的藏语语音识别方法研究)

Cited by: 3
Abstract  Training a speech recognition model requires a large amount of annotated speech data. For Tibetan, one of China's ethnic-minority languages, speech-annotation experts are scarce, so manually labeling speech corpora is very time-consuming and costly. Active learning addresses this by selecting informative samples from a large pool of unlabeled speech data, guided by the speech recognition objective, and submitting them to users for annotation, so that a small amount of high-quality training data can build a recognition model as accurate as one trained on a large dataset. This paper studies an active-learning-based method for selecting Lhasa-Tibetan speech data, proposes a near-optimal batch-mode sample-selection objective function, and proves that this objective is submodular. Experimental results show that the proposed method maintains the accuracy of the speech recognition model with less training data, thereby reducing the workload of manual annotation.
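The abstract's near-optimality claim rests on a standard property: greedily maximizing a monotone submodular set function achieves at least (1 - 1/e) of the optimal value. The paper's exact objective is not reproduced here; as a hypothetical illustration only, the sketch below greedily selects a batch under a facility-location utility (a common monotone submodular function over sample similarities), with made-up utterance embeddings standing in for real speech features.

```python
import numpy as np

def greedy_submodular_select(similarity, k):
    """Greedily pick k samples maximizing the facility-location function
    f(S) = sum_i max_{j in S} similarity[i, j].
    f is monotone submodular, so greedy selection is within (1 - 1/e)
    of the optimal batch of size k."""
    n = similarity.shape[0]
    selected = []
    # coverage[i] = best similarity of sample i to the batch chosen so far
    coverage = np.zeros(n)
    for _ in range(k):
        best_gain, best_j = -1.0, -1
        for j in range(n):
            if j in selected:
                continue
            # marginal gain of adding candidate j to the batch
            gain = np.maximum(coverage, similarity[:, j]).sum() - coverage.sum()
            if gain > best_gain:
                best_gain, best_j = gain, j
        selected.append(best_j)
        coverage = np.maximum(coverage, similarity[:, best_j])
    return selected

# Toy demo: 5 random "utterance embeddings", cosine similarity matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
X /= np.linalg.norm(X, axis=1, keepdims=True)
S = X @ X.T
batch = greedy_submodular_select(S, k=2)
print(batch)
```

In a real pipeline the similarity matrix would come from acoustic features of the unlabeled pool, and the selected batch would be sent to annotators; the diminishing-returns property of the objective is what makes the single greedy pass near-optimal.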
Authors  ZHAO Yue; LI Yaoqiang; XU Xiaona; WU Licheng (School of Information Engineering, Minzu University of China, Beijing 100081, China)
Source  Computer Engineering and Applications (《计算机工程与应用》, CSCD, Peking University Core Journal), 2018, No. 22, pp. 156-159, 215 (5 pages)
Funding  Humanities and Social Sciences Planning Project of the Ministry of Education (No. 15YJAZH120)
Keywords  near-optimal batch mode active learning; submodular function; speech corpus selection; Lhasa-Tibetan speech recognition
