Abstract
Recurrent Neural Networks (RNNs) are now widely used for acoustic modeling in Automatic Speech Recognition (ASR). Although they offer clear advantages over traditional acoustic modeling methods, their relatively high computational complexity limits their application, especially in real-time scenarios. Since the input features used by RNNs usually carry long acoustic contexts, the overlapped information can be exploited to reduce the time complexity of both the posterior calculation and the token passing process. This paper introduces a novel decoder structure that regularly drops overlapped frames, reducing the computational cost of the decoding process. In particular, the method can be applied directly to the original RNN model with only minor modifications to the Hidden Markov Model (HMM) topology, which makes it highly flexible. The proposed method is validated with a time-delay neural network as an example and achieves a 2 to 4 times speedup with relatively small accuracy loss.
Recurrent Neural Networks (RNNs) are widely used for acoustic modeling in Automatic Speech Recognition (ASR). Although RNNs show many advantages over traditional acoustic modeling methods, their inherently higher computational cost limits their usage, especially in real-time applications. Noticing that the features used by RNNs usually have relatively long acoustic contexts, it is possible to lower the computational complexity of both the posterior calculation and the token passing process by exploiting the overlapped information. This paper introduces a novel decoder structure that drops the overlapped acoustic frames regularly, which leads to a significant computational cost reduction in the decoding process. In particular, the new approach can directly use the original RNNs with minor modifications to the HMM topology, which makes it flexible. In experiments on conversational telephone speech datasets, this approach achieves a 2 to 4 times speedup with little relative accuracy reduction.
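The core idea of the abstract, computing acoustic posteriors only every k-th frame and dropping the overlapped frames in between, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `compute_posterior` is a hypothetical stand-in for the RNN acoustic model's forward pass, and the skip factor `k` is an assumed parameter.

```python
def compute_posterior(frame):
    # Hypothetical stand-in for an RNN acoustic model forward pass:
    # normalizes the frame's values into a probability-like vector.
    total = sum(frame)
    return [x / total for x in frame]

def decode_with_frame_skipping(frames, k=2):
    """Compute posteriors for every k-th frame only, dropping the rest.

    Because consecutive frames share long overlapping acoustic context,
    skipping k-1 of every k frames cuts both the posterior computation
    and the subsequent token passing roughly by a factor of k.
    """
    posteriors = []
    for t in range(0, len(frames), k):  # keep only frames 0, k, 2k, ...
        posteriors.append(compute_posterior(frames[t]))
    return posteriors

# Four input frames, skip factor 2: only frames 0 and 2 are evaluated.
frames = [[1.0, 3.0], [2.0, 2.0], [5.0, 5.0], [4.0, 1.0]]
print(len(decode_with_frame_skipping(frames, k=2)))  # prints 2
```

In a real decoder, the token passing over the HMM would then advance one state transition per retained frame, which is why the paper pairs frame dropping with a small change to the HMM topology.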
Source
Journal of Electronics &amp; Information Technology (《电子与信息学报》)
Indexed in: EI, CSCD, PKU Core Journals
2017, No. 4, pp. 930-937 (8 pages)
Funding
The National Natural Science Foundation of China (U1536117, 11590770-4)
The National Key Research and Development Program of China (2016YFB0801200, 2016YFB0801203)
The Science and Technology Major Project of Xinjiang Uygur Autonomous Region (2016A03007-1)
Keywords
Speech recognition
Recurrent Neural Network (RNN)
Decoder
Frame skipping