Abstract: Speech recognition systems have become a unique family of human-computer interaction (HCI). Speech is one of the most naturally developed human abilities, and speech signal processing opens up a transparent, hands-free computing experience. This paper presents a retrospective yet modern survey of speech recognition systems. The development of automatic speech recognition (ASR) has seen quite a few milestones and breakthrough technologies, which are highlighted in this paper. A step-by-step rundown of the fundamental stages in developing a speech recognition system is presented, along with a brief discussion of modern developments and applications in this domain. This review aims to provide a summary and a starting point for those entering the vast field of speech signal processing. Since speech recognition has great potential in industries such as telecommunications, emotion recognition, and healthcare, this review should help researchers explore further applications that society can readily adopt in the coming years.
Funding: This research work was supported by the Ministry of Science and Technology of the Republic of China under contract MOST 108-2221-E-390-018.
Abstract: In this study, vector quantization (VQ) and hidden Markov models (HMMs) were used to achieve speech command recognition. Pre-emphasis, a Hamming window, and Mel-frequency cepstral coefficients were first adopted to obtain feature values; VQ and HMMs were then employed to perform the recognition. Each recorded utterance was three Chinese characters long. Five phrases, pronounced by a mix of different human voices, were recorded and used to test the models, and the recorded phrases were then run through the speech command recognizer to determine whether the experimental results were satisfactory.
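The front-end steps named in this abstract (pre-emphasis, then Hamming windowing over short frames ahead of MFCC extraction) can be sketched roughly as follows. The 0.97 pre-emphasis coefficient and the 400-sample/160-sample frame and hop lengths (25 ms / 10 ms at 16 kHz) are common defaults, not values taken from the paper:

```python
import math

def pre_emphasis(signal, alpha=0.97):
    """Boost high frequencies: y[n] = x[n] - alpha * x[n-1]."""
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def hamming(frame_len):
    """Hamming window coefficients for one analysis frame."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_len - 1))
            for n in range(frame_len)]

def frame_and_window(signal, frame_len=400, hop=160):
    """Split the signal into overlapping frames and apply the window;
    each returned frame is ready for FFT / Mel filterbank analysis."""
    win = hamming(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        frames.append([s * w for s, w in zip(frame, win)])
    return frames
```

Each windowed frame would then go through the usual MFCC chain (FFT, Mel filterbank, log, DCT) before vector quantization.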
Abstract: Hearing-impaired individuals may be unable to identify environmental sounds because of the noise around them, yet very little research has been conducted in this domain. The aim of this study is therefore to categorize sounds generated in the environment so that hearing-impaired individuals can distinguish the sound categories. To that end, we first define nine sound classes that typically exist in the environment: air conditioner, car horn, children playing, dog bark, drilling, engine idling, jackhammer, siren, and street music. We then record 100 sound samples from each category and extract features of each sound category using Mel-frequency cepstral coefficients (MFCC). The training dataset combines this feature set with the class variable, the sound category. Sound classification is a complex task, so we use two deep learning techniques, a multi-layer perceptron (MLP) and a convolutional neural network (CNN), to train classification models. The models are tested on a separate test set, and their performance is evaluated using precision, recall, and F1-score. The results show that the CNN model outperforms the MLP; however, the MLP also achieves decent accuracy in classifying unknown environmental sounds.
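The evaluation metrics this study reports (precision, recall, F1-score) reduce, per class, to counts of true and false positives and false negatives. A minimal per-class sketch (the class labels here are illustrative, not the study's data):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class precision, recall, and F1 for one `positive` label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Averaging these per-class values over the nine classes (macro-averaging) gives one overall score per model, which is one common way such comparisons are made.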
Abstract: In recent years, the accuracy of speech recognition (SR) has been one of the most active areas of research. Although SR systems work reasonably well in quiet conditions, they still suffer severe performance degradation under noisy conditions or distorted channels, so it is necessary to search for more robust feature extraction methods that perform better in adverse conditions. This paper investigates the performance of conventional and new hybrid speech feature extraction algorithms, Mel-frequency cepstral coefficients (MFCC), linear prediction coding coefficients (LPCC), perceptual linear prediction (PLP), and RASTA-PLP, in noisy conditions using a multivariate hidden Markov model (HMM) classifier. The behavior of the proposed system is evaluated on the TIDIGITS human voice corpus, recorded from 208 different adult speakers, in both the training and testing processes. The theoretical basis for the speech processing and classifier procedures is presented, and recognition results are reported as word recognition rates.
Funding: Supported by the Jiangsu University Student Training Program [SJCX19_0529], the research fund of Nanjing Institute of Engineering [CXY201931], and the National Natural Science Foundation of China (61871213).
Abstract: The field of digital audio forensics aims to detect threats and fraud in audio signals. Contemporary audio forensic techniques use digital signal processing to verify the authenticity of recorded speech, recognize speakers, and recognize recording devices. User-generated audio recordings from mobile phones are very helpful in a number of forensic applications. This article proposes a novel method for recognizing recording devices from their recorded audio signals. First, a database of recording-device features was constructed using 32 recording devices (20 mobile phones of different brands and 12 kinds of recording pens) in various environments. Second, audio features such as the Mel-frequency cepstral coefficients (MFCC) were extracted from each device's audio signals and used as model inputs. Finally, support vector machines (SVM) with a fractional Gaussian kernel were used to recognize the recording devices from their audio features. Experiments demonstrated that the proposed method achieved 93.4% accuracy in recognizing recording devices.
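The abstract does not give the exact form of its fractional Gaussian kernel. One common "fractional" variant raises the Euclidean distance to a fractional power d, with d = 2 recovering the standard Gaussian (RBF) kernel; the sketch below assumes that form, and the sigma and d values are illustrative:

```python
import math

def fractional_gaussian_kernel(x, y, sigma=1.0, d=1.5):
    """Assumed form: k(x, y) = exp(-||x - y||**d / sigma).
    d = 2 gives the standard RBF kernel; 0 < d < 2 a fractional variant."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-(dist ** d) / sigma)
```

In an SVM, such a function would replace the default RBF when building the Gram matrix over the MFCC feature vectors.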
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60903186) and the Shanghai Leading Academic Discipline Project (Grant No. J50104).
Abstract: This paper proposes a new phase feature derived from formant instantaneous characteristics for speech recognition (SR) and speaker identification (SI) systems. Using the Hilbert transform (HT), the formant characteristics can be represented by instantaneous frequency (IF) and instantaneous bandwidth, collectively called the formant instantaneous characteristics (FIC). To explore the importance of FIC in both SR and SI, this paper derives different features from FIC for SR and SI systems. When these new features are combined with conventional parameters, a higher identification rate can be achieved than with Mel-frequency cepstral coefficient (MFCC) parameters alone. The experimental results show that the new features are effective characteristic parameters and can be treated as a complement to conventional parameters for SR and SI.
Abstract: An algorithm involving Mel-frequency cepstral coefficients (MFCCs) is provided to perform signal feature extraction for the task of speaker accent recognition, and different classifiers are then compared on the MFCC features. For each signal, the mean vector of its MFCC matrix is used as the input vector for pattern recognition. A sample of 330 signals, containing 165 US voices and 165 non-US voices, is analyzed. In the comparison, using a cross-validation of size 500, k-nearest neighbors yields the highest average test accuracy while using the least computation time.
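The classification step described here, a majority vote among the k nearest mean-MFCC vectors, can be sketched as follows; the two-class labels and k = 3 are illustrative:

```python
import math
from collections import Counter

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def knn_classify(train, query, k=3):
    """train: list of (mean_mfcc_vector, label) pairs.
    Returns the majority label among the k nearest training vectors."""
    nearest = sorted(train, key=lambda fv: euclidean(fv[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

The simplicity of this classifier (no training phase beyond storing the vectors) is consistent with the low computation time the comparison reports.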
Abstract: One-dimensional Mel-frequency cepstrum coefficients (1D-MFCC) in conjunction with power spectrum analysis are usually used for feature extraction in speaker identification systems. However, because this one-dimensional feature extraction subsystem shows a low recognition rate when identifying utterances under harsh noise conditions, we have developed a speaker identification system based on two-dimensional bispectrum data, which is theoretically more robust to additive Gaussian noise. As the processing sequence of the 1D-MFCC method cannot be applied directly to two-dimensional bispectrum data, this paper proposes a 2D-MFCC method as an extension of 1D-MFCC, with the 2D filter design optimized using genetic algorithms (GA). Using 2D-MFCC with bispectrum analysis as the feature extraction technique, we then use a hidden Markov model as the pattern classifier. We experimentally evaluate the developed methods on utterances buried in various levels of noise. The results show that, without GA optimization, the 2D-MFCC method attains a recognition rate comparable to that of 1D-MFCC for utterances without added noise. When the utterances are buried in Gaussian noise, however, 2D-MFCC shows higher recognition capability, especially when optimized by genetic algorithms.
Abstract: The performance of classic Mel-frequency cepstral coefficients (MFCC) is unsatisfactory in noisy environments with different natural sound sources. In this paper, a classification approach for ecological environmental sounds using double-level energy detection (DED) is presented. The DED is used to detect the presence of sound signals under noisy conditions, and MFCC features are extracted only from the frames in which DED detects a sound signal. Experimental results show that the proposed technique has better noise immunity than classic MFCC and also outperforms both time-domain energy detection (TED) and frequency-domain energy detection (FED).
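The abstract does not detail the double-level energy detector, but a generic two-threshold (hysteresis) energy gate of the kind its name suggests might look like this; the frame contents and both threshold values are illustrative:

```python
def frame_energy(frame):
    return sum(s * s for s in frame)

def detect_active_frames(frames, low, high):
    """Two-threshold gate: a frame becomes active when its energy
    exceeds `high`, and the active state persists through following
    frames until the energy drops below `low`."""
    active = [False] * len(frames)
    in_sound = False
    for i, frame in enumerate(frames):
        e = frame_energy(frame)
        if e >= high:
            in_sound = True
        elif e < low:
            in_sound = False
        active[i] = in_sound
    return active
```

Only the frames flagged active would then be passed on to MFCC extraction, which is the selective-extraction idea the abstract describes.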
Abstract: The Wake-Up-Word Speech Recognition task (WUW-SR) is computationally very demanding, particularly the feature extraction stage, whose output is decoded with the corresponding hidden Markov models (HMMs) in the back-end stage of the WUW-SR. The state-of-the-art WUW-SR system is based on three different feature sets: Mel-frequency cepstral coefficients (MFCC), linear predictive coding coefficients (LPC), and enhanced Mel-frequency cepstral coefficients (ENH_MFCC). In "Front-End of Wake-Up-Word Speech Recognition System Design on FPGA" [1], we presented an experimental FPGA design and implementation of a novel real-time spectrogram extraction processor that generates MFCC, LPC, and ENH_MFCC spectrograms simultaneously. In this paper, we present the details of converting the three sets of spectrograms (MFCC, LPC, and ENH_MFCC) to their equivalent features. In the WUW-SR system, shown in Figure 1, the recognizer's front-end is located at the terminal, which is typically connected over a data network to a remote back-end recognizer (e.g., a server). The three sets of speech features are extracted at the front-end, then compressed and transmitted to the server via a dedicated channel, where they are subsequently decoded.
Funding: Supported by the National Natural Science Foundation of China (No. 6007201).
Abstract: The Mel-frequency cepstral coefficient (MFCC) is the most widely used feature in speech and speaker recognition. However, MFCC is very sensitive to noise interference, which tends to drastically degrade the performance of recognition systems because of the mismatch between training and testing conditions. In this paper, the logarithmic transformation in the standard MFCC analysis is replaced by a combined function to improve noise sensitivity. The proposed feature extraction process is also combined with speech enhancement methods, such as spectral subtraction and median filtering, to further suppress the noise. Experiments show that the proposed robust MFCC-based feature significantly reduces the recognition error rate over a wide range of signal-to-noise ratios.
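Of the enhancement methods mentioned, spectral subtraction is the simplest to sketch: subtract a noise magnitude estimate bin-by-bin from each frame's magnitude spectrum and clamp to a small spectral floor so no bin goes negative. The floor factor below is an illustrative choice, not a value from the paper:

```python
def spectral_subtraction(mag, noise_mag, floor=0.01):
    """mag, noise_mag: magnitude spectra (per-bin) of the noisy frame
    and of the noise estimate. Clamp each bin to floor * mag[i] so the
    result stays a valid (non-negative) magnitude spectrum."""
    return [max(m - n, floor * m) for m, n in zip(mag, noise_mag)]
```

The noise estimate is typically averaged over frames judged to contain no speech; the cleaned spectrum then feeds the (modified) MFCC analysis.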
Funding: This work was supported by the Basic Research Project of the Science and Technology Department of Qinghai Province, China (Grant No. 2020-ZJ-716), the Key Research and Development Project of the Science and Technology Department of Jiangsu Province, China (Grant No. BE2018433), and the Key Research and Development Project of the Science and Technology Department of Qinghai Province, China (Grant No. 2017-HZ-813).
Abstract: A deep learning approach using long short-term memory (LSTM) networks was implemented in this study to classify the sounds of short-term feeding behaviour of sheep, including biting, chewing, bolus regurgitation, and rumination chewing. The original acoustic signal was split into sound episodes using an endpoint detection method based on thresholds of short-term energy and average zero-crossing rate. A discrete wavelet transform (DWT), Mel-frequency cepstral analysis, and principal component analysis (PCA) were integrated to extract dimensionally reduced, DWT-based Mel-frequency cepstral coefficients (denoted PW_MFCC) for each sound episode. LSTM networks were then employed to train classifiers for sound episode categories. The performance of LSTM classifiers using the original Mel-frequency cepstral coefficients (MFCC), DWT-based MFCC (denoted W_MFCC), and PW_MFCC as input features was compared. The comparison demonstrated that introducing the DWT improved classifier performance effectively, and that PCA reduced the computational overhead without degrading classifier performance. The overall accuracy and comprehensive F1-score of the PW_MFCC-based LSTM classifier were 94.97% and 97.41%, respectively. The classifier established in this study provides a foundation for an automatic identification system for sick sheep with abnormal feeding and rumination behaviour patterns.
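The endpoint detection step, thresholds on short-term energy and average zero-crossing rate, can be sketched as follows; the decision rule (either measure exceeding its threshold marks a frame as part of an episode) and the threshold values are illustrative, since the paper's exact logic is not given in the abstract:

```python
def short_term_energy(frame):
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def find_endpoints(frames, energy_thr, zcr_thr):
    """Flag frames whose energy or zero-crossing rate exceeds its
    threshold, then return (start, end) frame indices of each
    contiguous run of flagged frames (one run = one sound episode)."""
    flags = [short_term_energy(f) > energy_thr or
             zero_crossing_rate(f) > zcr_thr for f in frames]
    episodes, start = [], None
    for i, on in enumerate(flags):
        if on and start is None:
            start = i
        elif not on and start is not None:
            episodes.append((start, i - 1))
            start = None
    if start is not None:
        episodes.append((start, len(flags) - 1))
    return episodes
```

Each returned episode would then be handed to the DWT/MFCC/PCA feature pipeline before LSTM classification.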
Funding: Supported by the National High-Technology Development Program of China (No. 2001AA114071).
Abstract: In speech recognition systems, the physiological characteristics of the speech production model cause the voiced sections of the speech signal to be attenuated by approximately 20 dB per decade. Many speech recognition algorithms address this by filtering the input signal with a single-zero high-pass filter. Unfortunately, this technique amplifies the noise energy at frequencies above 4 kHz, which in some cases degrades recognition accuracy. This paper addresses the problem with a pre-emphasis filter at the front end of the recognizer. The aim is to develop a modified parameterization approach that takes the whole energy zone of the spectrum into account, to improve the performance of the existing baseline recognition system in the acoustic phase. The results show that a large-vocabulary, speaker-independent continuous speech recognition system using this approach achieves a greatly improved recognition rate.
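Why a single-zero high-pass filter both compensates the roughly 20 dB/decade spectral tilt and amplifies high-frequency noise can be seen from the magnitude response of H(z) = 1 - alpha * z^-1: small gain near DC, rising gain toward the Nyquist frequency. A small sketch (alpha = 0.97 and fs = 8 kHz are illustrative values, not from the paper):

```python
import cmath
import math

def preemph_response_db(freq_hz, fs=8000.0, alpha=0.97):
    """Magnitude response in dB of the single-zero high-pass filter
    H(z) = 1 - alpha * z**-1, evaluated on the unit circle at freq_hz."""
    z_inv = cmath.exp(-2j * math.pi * freq_hz / fs)
    return 20 * math.log10(abs(1 - alpha * z_inv))
```

Evaluating this at low and high frequencies shows the tilt compensation (strong attenuation near 0 Hz, a boost near fs/2), and the same high-frequency boost is what raises noise energy above 4 kHz.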
Funding: Supported by the National Natural Science Foundation of China and Microsoft Research Asia (No. 60776800), the National Natural Science Foundation of China and the Research Grants Council (No. 60931160443), and the National High-Tech Research and Development (863) Program of China (Nos. 2006AA010101, 2007AA04Z223, 2008AA02Z414, and 2008AA040201).
Abstract: An English speech recognition system was implemented on a chip, called a speech system-on-chip (SoC). The SoC includes an application-specific integrated circuit with a vector accelerator to improve performance. A sub-word-model recognition algorithm based on continuous-density hidden Markov models runs on this very inexpensive speech chip. The algorithm is a two-stage, fixed-width beam-search baseline system with a variable beam-width pruning strategy and a frame-synchronous word-level pruning strategy that together significantly reduce the recognition time. Tests show that this method reduces the recognition time nearly 6-fold and the memory size nearly 2-fold compared with the original system, with less than 1% accuracy degradation and a recognition accuracy of about 98% on a 600-word recognition task.
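Beam pruning of the kind this system relies on keeps, at each frame, only hypotheses whose score is within a beam of the best one, often with a cap on the number of active hypotheses. A generic sketch (the paper's variable-beam-width and word-level strategies are more elaborate than this fixed-beam version):

```python
def prune_beam(hypotheses, beam_width, max_active=None):
    """hypotheses: list of (score, state) with higher scores better.
    Keep hypotheses within `beam_width` of the best score, optionally
    capped at `max_active` entries (histogram pruning), best first."""
    best = max(score for score, _ in hypotheses)
    kept = [h for h in hypotheses if h[0] >= best - beam_width]
    kept.sort(key=lambda h: h[0], reverse=True)
    if max_active is not None:
        kept = kept[:max_active]
    return kept
```

Tightening `beam_width` or `max_active` trades a small accuracy loss for large time and memory savings, which is the trade-off the reported 6-fold speedup exploits.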