Abstract
Because it can approximate virtually any distribution well, the Gaussian mixture model (GMM) is widely used in pattern recognition. GMM parameters are usually estimated with the iterative expectation-maximization (EM) algorithm, and training time becomes very long when the amount of training data and the number of mixture components are large. The compute unified device architecture (CUDA) technology provided by NVIDIA enables large-scale parallel computation by running thousands of threads concurrently on a graphics processing unit (GPU). This paper presents a fast CUDA-based GMM training method that is especially suitable for very large amounts of training data. It consists of two parts: a fast implementation of the K-means algorithm for model initialization and a fast implementation of the EM algorithm for parameter estimation. The method is further applied to training language GMMs for language identification. Experimental results show that, compared with a single core of an Intel Dual-Core Pentium IV 3.0 GHz CPU, language GMM training on an NVIDIA GTS 250 GPU is about 26 times faster.
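The speed-up in such a scheme comes mainly from evaluating every Gaussian component against every feature frame in parallel on the GPU during the E-step. The abstract gives no code; the following minimal CUDA kernel is only an illustrative sketch of this idea, assuming diagonal covariances and one thread per frame, and all function and variable names (gmm_loglik_kernel, invVars, logWeights, etc.) are hypothetical rather than the authors' implementation.

#include <cuda_runtime.h>
#include <math.h>

// Sketch: each thread handles one feature frame and accumulates its
// log-likelihood over all diagonal-covariance Gaussian components.
// logWeights[m] is assumed to hold log(w_m) minus the Gaussian
// normalization constant, precomputed on the host.
__global__ void gmm_loglik_kernel(const float *frames,     // [numFrames * dim]
                                  const float *means,      // [numMix * dim]
                                  const float *invVars,    // [numMix * dim], 1 / sigma^2
                                  const float *logWeights, // [numMix]
                                  float *frameLogLik,      // [numFrames], output
                                  int numFrames, int numMix, int dim)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numFrames) return;

    const float *x = frames + t * dim;
    float total = -INFINITY;                      // running log-sum-exp
    for (int m = 0; m < numMix; ++m) {
        const float *mu = means + m * dim;
        const float *iv = invVars + m * dim;
        float logp = logWeights[m];
        for (int d = 0; d < dim; ++d) {
            float diff = x[d] - mu[d];
            logp -= 0.5f * diff * diff * iv[d];
        }
        float hi = fmaxf(total, logp);            // log-sum-exp update
        total = hi + logf(expf(total - hi) + expf(logp - hi));
    }
    frameLogLik[t] = total;
}

// Example launch: gmm_loglik_kernel<<<(numFrames + 255) / 256, 256>>>(...);

A similar one-thread-per-frame layout can be used for the distance computations in the K-means initialization stage; the per-component statistics needed by the M-step are then obtained with parallel reductions over frames.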
Source
Journal of Data Acquisition and Processing (《数据采集与处理》)
Indexed in CSCD and the Peking University Core Journal list (北大核心)
2012, No. 1, pp. 85-90 (6 pages)
Keywords
Gaussian mixture model (GMM)
language identification
graphics processing unit (GPU)
compute unified device architecture (CUDA)