Abstract: The previously proposed syllable-synchronous network search (SSNS) algorithm plays an important role in word decoding for continuous Chinese speech recognition and achieves satisfactory performance. This paper carefully studies several key factors that may affect the overall word decoding result, including refinement of the vocabulary, big-discount Turing re-estimation of the N-gram probabilities, and management of the search-path buffers. Based on these discussions, corresponding approaches to improving the SSNS algorithm are proposed. Compared with the previous version of the SSNS algorithm, the new version decreases the Chinese character error rate (CCER) in word decoding by 42.1% on a database consisting of a large number of test sentences (syllable strings).
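The abstract mentions re-estimating N-gram probabilities with a big-discount Turing method. As an illustrative sketch only, here is the classic Good-Turing count re-estimation that such methods build on (the paper's exact discounting variant is not specified; the bigram data is made up):

```python
from collections import Counter

def good_turing_counts(counts):
    """Re-estimate raw counts r as r* = (r + 1) * N_{r+1} / N_r,
    where N_r is the number of n-grams observed exactly r times."""
    freq_of_freq = Counter(counts.values())
    adjusted = {}
    for ngram, r in counts.items():
        n_r = freq_of_freq[r]
        n_r1 = freq_of_freq.get(r + 1, 0)
        # Fall back to the raw count when N_{r+1} is zero (sparse tail).
        adjusted[ngram] = (r + 1) * n_r1 / n_r if n_r1 else r
    return adjusted

# Toy syllable bigrams: singletons get discounted below 1 (2 * 1 / 3),
# freeing probability mass for unseen events.
counts = {("ni", "hao"): 1, ("hao", "ma"): 1, ("ma", "ya"): 1, ("zai", "jian"): 2}
adj = good_turing_counts(counts)
```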
Abstract: After pointing out the unreasonableness of the three basic assumptions contained in HMMs, we introduce the theory and advantages of Stochastic Trajectory Models (STMs), which may resolve the problems caused by the HMM assumptions. In STM, the acoustic observations of an acoustic unit are represented as clusters of trajectories in a parameter space. The trajectories are modelled by mixtures of probability density functions of random sequences of states. After analyzing the characteristics of Chinese speech, the acoustic units for continuous Chinese speech recognition based on STM are discussed, and phone-like units are suggested. The performance of continuous Chinese speech recognition based on STM is studied on the VINICS system. The experimental results demonstrate the efficiency of STM and the consistency of phone-like units.
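The core STM idea above, scoring a whole observation trajectory under a mixture of trajectory densities rather than framewise-independent HMM states, can be sketched minimally as follows (a simplified illustration with diagonal unit-style Gaussians per frame; the paper's actual densities and clustering are richer):

```python
import math

def trajectory_mixture_loglik(traj, components):
    """Log-likelihood of a fixed-length 1-D observation trajectory under a
    mixture of trajectory densities. Each component is (weight, per-frame
    means, shared variance); frames are scored jointly along the trajectory."""
    total = 0.0
    for w, means, var in components:
        ll = 0.0
        for x, m in zip(traj, means):
            ll += -0.5 * (math.log(2 * math.pi * var) + (x - m) ** 2 / var)
        total += w * math.exp(ll)
    return math.log(total)

# One component, trajectory exactly on the mean: density is maximal.
score = trajectory_mixture_loglik([0.0, 0.0], [(1.0, [0.0, 0.0], 1.0)])
```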
Abstract: In this paper a novel technique adopted in HarkMan is introduced. HarkMan is a keyword spotter designed to automatically spot the given words of a vocabulary-independent task in unconstrained Chinese telephone speech. Neither the speaking manner nor the number of keywords is limited. This paper focuses on the novel techniques addressing acoustic modeling, the keyword-spotting network, search strategies, robustness, and rejection. The underlying technologies used in HarkMan are useful not only for keyword spotting but also for continuous speech recognition. The system has achieved a figure-of-merit value over 90%.
Funding: Supported by the National High-Tech Research and Development (863) Program of China (No. 863-306-ZD03-01-2).
Abstract: In this paper we address the problem of audio-visual speech recognition in the framework of the multi-stream hidden Markov model. Stream weight training based on the minimum classification error criterion is discussed for use in large vocabulary continuous speech recognition (LVCSR). We present lattice rescoring and Viterbi approaches for calculating the loss function of continuous speech. The experimental results show that in the case of clean audio, system performance can be improved by 36.1% in relative word error rate reduction when using state-based stream weights trained by a Viterbi approach, compared to an audio-only speech recognition system. Further experimental results demonstrate that our audio-visual LVCSR system provides a significant enhancement of robustness in noisy environments.
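The multi-stream HMM described above combines per-stream state likelihoods through trained stream weights. A minimal sketch of that combination rule (the weight values below are illustrative, not the paper's trained ones):

```python
def combined_log_likelihood(log_b_audio, log_b_video, w_audio, w_video):
    """Multi-stream HMM state score: the weighted sum of per-stream
    log-likelihoods, log b(o) = w_a * log b_a(o_a) + w_v * log b_v(o_v)."""
    return w_audio * log_b_audio + w_video * log_b_video

# In clean audio the trained weights lean on the acoustic stream;
# in noise the visual stream's weight grows (illustrative numbers).
clean = combined_log_likelihood(-2.0, -5.0, w_audio=0.8, w_video=0.2)
noisy = combined_log_likelihood(-9.0, -5.0, w_audio=0.3, w_video=0.7)
```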
Funding: The work was supported in part by the National Natural Science Foundation of China (Grant No. 90920302), the National Key Basic Research Program of China (No. 2009CB825404), the HGJ Grant (No. 2011ZX01042-001-001), a research program from Microsoft China, and a GRF grant from the Research Grants Council of Hong Kong SAR (CUHK 4180/10E). Lei Xu is also supported by the Chang Jiang Scholars Program of the Chinese Ministry of Education for a Chang Jiang Chair Professorship at Peking University.
Abstract: This paper presents a new discriminative approach for training the Gaussian mixture models (GMMs) of a hidden Markov model (HMM) based acoustic model in a large vocabulary continuous speech recognition (LVCSR) system. The approach embeds a rival penalized competitive learning (RPCL) mechanism at the level of hidden Markov states. For every input, the correct identity state, called the winner and obtained by Viterbi forced alignment, is enhanced to describe this input, while its most competitive rival is penalized by de-learning, which makes the GMM-based states more discriminative. Without the extensive computational burden required by typical discriminative learning methods for one-pass recognition of the training set, the new approach saves computing costs considerably. Experiments show that the proposed method converges well and performs better than the classical maximum likelihood estimation (MLE) based method. Compared with two conventional discriminative methods, the proposed method demonstrates improved generalization ability, especially when the test set is not well matched with the training set.
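The winner-enhance / rival-de-learn mechanism described above can be sketched as a single RPCL-style mean update (a simplified illustration on plain mean vectors; the paper applies it to GMM parameters of HMM states, and the rates here are made up):

```python
def rpcl_update(means, x, winner, rival, lr=0.05, delr=0.005):
    """One rival penalized competitive learning step: the winner state's
    mean moves toward the observation (learning), while the most
    competitive rival moves away (de-learning, with a much smaller rate)."""
    means = [m[:] for m in means]  # copy so the update is non-destructive
    for d in range(len(x)):
        means[winner][d] += lr * (x[d] - means[winner][d])
        means[rival][d] -= delr * (x[d] - means[rival][d])
    return means

means = [[0.0, 0.0], [1.0, 1.0]]   # state 0 = winner, state 1 = rival
x = [0.2, 0.0]                     # current observation
updated = rpcl_update(means, x, winner=0, rival=1)
```

Because the de-learning rate is kept much smaller than the learning rate, rivals are pushed away only gently, which is what keeps the procedure cheap compared with full discriminative training.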
Abstract: Speaker variability is an important source of speech variation that makes continuous speech recognition a difficult task. Adapting automatic speech recognition (ASR) models to speaker variations is a well-known strategy for coping with this challenge. Almost all such techniques focus on developing adaptation solutions within the acoustic models of the ASR system. Although variations of the acoustic features constitute an important portion of inter-speaker variation, they do not cover variation at the phonetic level. Phonetic variations are known to form an important part of the overall variation and are influenced by both micro-segmental and suprasegmental factors. Inter-speaker phonetic variations are shaped by the structure and anatomy of a speaker's articulatory system and also by his or her speaking style, which is driven by many speaker background characteristics such as accent, gender, age, and socioeconomic and educational class. The effect of inter-speaker variations in the feature space may cause explicit phone recognition errors. These errors can later be compensated for by providing pronunciation variants for the lexicon entries that account for likely phone misclassifications as well as pronunciation. In this paper, we introduce speaker-adaptive dynamic pronunciation models, which generate different lexicons for various speaker clusters and different ranges of speech rate. The models are hybrids of speaker-adapted contextual rules and dynamic generalized decision trees, which take into account word phonological structures, rate of speech, unigram probabilities, and stress to generate pronunciation variants of words. Employing the set of speaker-adapted dynamic lexicons in a Farsi (Persian) continuous speech recognition task results in word error rate reductions of as much as 10.1% in a speaker-dependent scenario and 7.4% in a speaker-independent scenario.
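The contextual-rule half of the hybrid model above rewrites phones in context to produce pronunciation variants. A minimal sketch with a hypothetical rule format, (left_context, target, right_context, replacement), where "*" matches any context and "#" marks a word boundary; the phones and the reduction rule are invented for illustration:

```python
def apply_rules(phones, rules):
    """Generate a pronunciation variant by applying contextual
    substitution rules to a phone sequence. Each rule is a tuple
    (left_context, target, right_context, replacement)."""
    out = list(phones)
    for i, p in enumerate(phones):
        left = phones[i - 1] if i > 0 else "#"
        right = phones[i + 1] if i < len(phones) - 1 else "#"
        for lc, tgt, rc, rep in rules:
            if p == tgt and lc in (left, "*") and rc in (right, "*"):
                out[i] = rep
                break  # first matching rule wins
    return out

# Hypothetical fast-speech rule: /t/ between two /a/ vowels reduces to /d/.
fast_rules = [("a", "t", "a", "d")]
variant = apply_rules(["k", "a", "t", "a", "b"], fast_rules)
```

In the paper's setting such rule sets are adapted per speaker cluster and per speech-rate range, so each cluster receives its own generated lexicon.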