Journal Article

CUDA-Based Fast GMM Model Training Method and Its Application (cited 3 times)
Abstract: Due to its ability to closely approximate any distribution, the Gaussian mixture model (GMM) is widely applied in the field of pattern recognition. GMM parameters are usually estimated with the iterative expectation-maximization (EM) algorithm, and training can take a very long time when the amount of training data and the mixture number are both large. The compute unified device architecture (CUDA) technology provided by NVIDIA can perform massively parallel computation by running thousands of threads concurrently on a graphics processing unit (GPU). This paper presents a fast CUDA-based GMM training method that is especially suited to very large training sets. It comprises two parts: a fast implementation of the K-means algorithm for model initialization and a fast implementation of the EM algorithm for parameter estimation. The method is further applied to training language GMMs for language identification. Experimental results show that language GMM training on an NVIDIA GTS250 GPU is about 26 times faster than the traditional implementation on a single core of an Intel Dual-Core Pentium IV 3.0 GHz CPU.
Source: Journal of Data Acquisition and Processing (《数据采集与处理》), CSCD, Peking University Core Journal, 2012, Issue 1, pp. 85-90 (6 pages).
Keywords: Gaussian mixture model (GMM); language identification; graphics processing unit (GPU); compute unified device architecture (CUDA)
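The abstract describes the standard EM procedure that the paper accelerates: for every training frame, evaluate its likelihood under each Gaussian mixture component (the E-step), then re-estimate the mixture parameters from the accumulated statistics (the M-step). A minimal NumPy sketch of one such iteration for a diagonal-covariance GMM is given below; this is not the paper's CUDA code, but the per-frame likelihood evaluation it vectorizes is exactly the data-parallel step the paper maps onto GPU threads (roughly one thread per frame). All names are illustrative.

```python
import numpy as np

def em_step(X, weights, means, variances):
    """One EM iteration for a diagonal-covariance GMM.

    X: (T, D) feature frames; weights: (M,); means, variances: (M, D).
    Returns updated parameters and the log-likelihood of X under the
    *input* parameters, which EM guarantees to be non-decreasing.
    """
    T, D = X.shape

    # E-step: log N(x_t | mu_m, sigma_m) for every frame/mixture pair.
    # This (T, M) computation is the part that dominates training time
    # and is trivially parallel across frames.
    log_norm = -0.5 * (D * np.log(2 * np.pi) + np.log(variances).sum(axis=1))  # (M,)
    diff = X[:, None, :] - means[None, :, :]                       # (T, M, D)
    log_prob = log_norm - 0.5 * (diff**2 / variances).sum(axis=2)  # (T, M)
    log_joint = np.log(weights) + log_prob
    log_total = np.logaddexp.reduce(log_joint, axis=1, keepdims=True)
    resp = np.exp(log_joint - log_total)        # responsibilities, rows sum to 1

    # M-step: re-estimate parameters from the sufficient statistics.
    Nm = resp.sum(axis=0)                       # effective frame count per mixture
    new_weights = Nm / T
    new_means = (resp.T @ X) / Nm[:, None]
    new_vars = (resp.T @ X**2) / Nm[:, None] - new_means**2
    new_vars = np.maximum(new_vars, 1e-6)       # variance floor for stability
    return new_weights, new_means, new_vars, log_total.sum()
```

In the GPU version described by the paper, the E-step accumulations become parallel reductions over frames; the floating-point summation-order caveats that motivate reference 8 (Goldberg) arise precisely in those reductions.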
References (11)

  • 1 Dempster A P, Laird N M, Rubin D B. Maximum likelihood from incomplete data via the EM algorithm[J]. Journal of the Royal Statistical Society, Series B, 1977, 39(1): 1-38.
  • 2 Torres-Carrasquillo P A, Singer E, Kohler M A, et al. Approaches to language identification using Gaussian mixture models and shifted delta cepstral features[C]//Proc ICSLP 2002. Colorado, USA: [s.n.], 2002: 89-92.
  • 3 Kumar N, Satoor S, Buck I. Fast parallel expectation maximization for Gaussian mixture models on GPUs using CUDA[C]//Proc 11th IEEE International Conference on High Performance Computing and Communications. Washington DC, USA: IEEE Computer Society Press, 2009: 103-109.
  • 4 Bai Hongtao, He Lili, Ouyang Dantong, et al. K-means on commodity GPUs with CUDA[C]//Proc 2009 WRI World Congress on Computer Science and Information Engineering. Washington DC, USA: IEEE Computer Society Press, 2009: 651-655.
  • 5 Zechner M, Granitzer M. Accelerating K-means on the graphics processor via CUDA[C]//Proc 2009 First International Conference on Intensive Applications and Services. Washington DC, USA: IEEE Computer Society Press, 2009: 7-15.
  • 6 Bilmes J A. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models[R]. ICSI Technical Report TR-97-021. USA: ICSI, 1998.
  • 7 NVIDIA. NVIDIA CUDA programming guide, version 2.2[EB/OL]. http://www.nvidia.com/object/cuda_develop.html.
  • 8 Goldberg D. What every computer scientist should know about floating-point arithmetic[J]. ACM Computing Surveys, 1991, 23(1): 5-48.
  • 9 Wong E, Sridharan S. Methods to improve Gaussian mixture model based language identification system[C]//Proc ICSLP 2002. Colorado, USA: [s.n.], 2002: 93-96.
  • 10 Fu Qiang, Song Yan, Dai Lirong. Application of factor analysis in GMM-based automatic language identification[J]. Journal of Chinese Information Processing, 2009, 23(4): 77-81.

