Funding: Funded by the Hanoi University of Science and Technology (HUST) under grant number T2018-PC-210.
Abstract: In recent years, speech synthesis systems have allowed the production of very high-quality voices, so research in this domain is now turning to the problem of integrating emotions into speech. However, building a separate speech synthesizer for each emotion has several limitations. First, it usually requires an emotional-speech data set with many sentences, and such data sets are very time- and labor-intensive to build. Second, training each of these models requires computers with large computational capacity and considerable effort and time for model tuning. In addition, a separate model for each emotion cannot take advantage of the data sets of the other emotions. In this paper, we propose a new method for synthesizing emotional speech in which the latent expressions of emotions are learned from a small data set of professional actors through a Flowtron model. We also provide a new method for building a speech corpus that is scalable and whose quality is easy to control. Next, to produce a high-quality speech synthesis model, we used this data set to train a Tacotron 2 model, which we then used as a pre-trained model to train the Flowtron model. We applied this method to synthesize Vietnamese speech with sadness and happiness. Mean opinion score (MOS) assessments give a MOS of 3.61 for sadness and 3.95 for happiness. In conclusion, the proposed method proves effective for highly automated, fast emotional sentence generation from a small emotional-speech data set.
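Naturalness and emotional quality are evaluated here with the mean opinion score (MOS). As a small, self-contained illustration of how such listener ratings are typically aggregated (the ratings below are invented, not the paper's data), a per-condition MOS with an approximate 95% confidence interval can be computed as follows:

```python
import math
from statistics import mean, stdev

def mos_summary(ratings):
    """Mean opinion score with an approximate 95% confidence interval.
    `ratings` is a flat list of 1-5 listener scores for one condition."""
    m = mean(ratings)
    # Normal approximation; adequate for the dozens of ratings a MOS test collects.
    ci = 1.96 * stdev(ratings) / math.sqrt(len(ratings))
    return m, ci

# Hypothetical ratings for two synthesized emotions (illustration only).
scores = {
    "sadness":   [4, 3, 4, 3, 4, 4, 3, 4, 3, 4],
    "happiness": [4, 4, 5, 4, 4, 3, 4, 5, 4, 4],
}
for emotion, r in scores.items():
    m, ci = mos_summary(r)
    print(f"{emotion}: MOS = {m:.2f} ± {ci:.2f}")
```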
Abstract: The synthesis of emotional speech has wide applications in fields such as human-computer interaction, medicine, and industry. In this work, an emotional speech synthesis system is proposed based on prosodic feature modification and the Time-Domain Pitch-Synchronous Overlap-Add (TD-PSOLA) waveform concatenation algorithm. The system produces synthesized speech with four types of emotion: angry, happy, sad, and bored. The experimental results show that the proposed emotional speech synthesis system achieves good performance: the produced utterances present clear emotional expression, and the subjective test reaches high classification accuracy for the different types of synthesized emotional utterances.
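TD-PSOLA modifies prosody by extracting pitch-synchronous, windowed short-term frames, re-spacing them according to the desired pitch and duration factors, and overlap-adding the result. The following is a deliberately simplified sketch of that idea, not the authors' implementation: it assumes pitch marks for a voiced segment are already available and ignores unvoiced regions and the frame repetition/deletion details that a production system needs.

```python
import numpy as np

def td_psola(signal, pitch_marks, pitch_factor=1.0, time_factor=1.0):
    """Simplified TD-PSOLA: re-space two-period, Hann-windowed frames taken at
    analysis pitch marks so that F0 is scaled by pitch_factor and duration by
    time_factor, then overlap-add and renormalize by the window sum."""
    signal = np.asarray(signal, dtype=float)
    marks = np.asarray(pitch_marks, dtype=int)
    max_period = int(np.diff(marks).max())
    out_len = int(len(signal) * time_factor) + 2 * max_period
    out = np.zeros(out_len)
    win_sum = np.zeros(out_len)

    t = float(marks[1])                                    # first synthesis mark
    while t < len(signal) * time_factor - max_period:
        # analysis mark whose time-scaled position is closest to this synthesis mark
        i = int(np.argmin(np.abs(marks * time_factor - t)))
        i = int(np.clip(i, 1, len(marks) - 2))
        start, centre, end = marks[i - 1], marks[i], marks[i + 1]
        frame = signal[start:end]                          # two pitch periods
        win = np.hanning(len(frame))
        pos = int(round(t)) - (centre - start)             # align frame centre with t
        if pos >= 0:
            out[pos:pos + len(frame)] += frame * win
            win_sum[pos:pos + len(frame)] += win
        t += (end - centre) / pitch_factor                  # synthesis pitch period
    win_sum[win_sum < 1e-8] = 1.0                           # avoid divide-by-zero
    return out / win_sum
```

For example, `td_psola(x, marks, pitch_factor=1.15, time_factor=0.9)` would raise F0 by roughly 15% and shorten the segment by about 10%, the kind of prosodic modification used to push a neutral recording toward a happier rendering.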
Abstract: This paper presents a method of hidden Markov model (HMM)-based Mandarin-Tibetan bilingual emotional speech synthesis by speaker adaptive training with a Mandarin emotional speech corpus. A one-speaker Tibetan neutral speech corpus, a multi-speaker Mandarin neutral speech corpus, and a multi-speaker Mandarin emotional speech corpus are first employed to train a set of mixed-language average acoustic models of the target emotion using speaker adaptive training. Then a one-speaker Mandarin neutral speech corpus or a one-speaker Tibetan neutral speech corpus is adopted to obtain a set of speaker-dependent acoustic models of the target emotion using the speaker adaptation transformation. Mandarin or Tibetan emotional speech is finally synthesized from the corresponding speaker-dependent acoustic models of the target emotion. Subjective tests show that the average emotional mean opinion score is 4.14 for Tibetan and 4.26 for Mandarin, the average mean opinion score is 4.16 for Tibetan and 4.28 for Mandarin, and the average degradation opinion score is 4.28 for Tibetan and 4.24 for Mandarin. Therefore, the proposed method can synthesize both Tibetan and Mandarin speech with high naturalness and emotional expression using only a Mandarin emotional training speech corpus.
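The speaker adaptation transformation referred to above is, in HTS-style systems, typically an MLLR/CMLLR-type affine transform of the Gaussian means estimated from a small amount of target data; that specific choice is an assumption here, not stated in the abstract. The toy sketch below is only meant to make the step concrete: it fits a single global transform by least squares with hard frame-to-state assignments, whereas real MLLR uses covariance-weighted statistics, state occupancies from forced alignment, and multiple regression classes.

```python
import numpy as np

def global_mean_transform(means, frames, assignments):
    """Least-squares fit of one global affine transform (A, b) such that
    frames[t] ~= A @ means[assignments[t]] + b  (a toy, MLLR-like mean update)."""
    X = np.asarray(frames, dtype=float)                          # (T, D) adaptation frames
    M = np.asarray(means, dtype=float)[np.asarray(assignments)]  # (T, D) assigned state means
    Xi = np.hstack([M, np.ones((len(X), 1))])                    # (T, D+1) extended regressors
    W, *_ = np.linalg.lstsq(Xi, X, rcond=None)                   # (D+1, D)
    return W[:-1].T, W[-1]                                       # A: (D, D), b: (D,)

def adapt_means(means, A, b):
    """Shift every average-voice mean toward the target speaker/emotion."""
    return np.asarray(means, dtype=float) @ A.T + b

# Tiny invented example: 3 Gaussian means in 2-D, 5 adaptation frames.
means = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
frames = np.array([[0.5, 0.2], [1.6, 0.1], [0.4, 1.3], [1.5, 0.3], [0.6, 1.2]])
assignments = [0, 1, 2, 1, 2]
A, b = global_mean_transform(means, frames, assignments)
print(adapt_means(means, A, b))
```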
Funding: Results of a research project funded by the Natural Science Foundation of Hebei University of Economics and Business (No. 2016KYQ05).
Abstract: To improve the performance of human-computer interaction interfaces, emotion is considered one of the most important factors. The major objective of expressive speech synthesis is to inject various expressions reflecting different emotions into the synthesized speech. To model and control emotion effectively, emotion intensity is introduced into the expressive speech synthesis model so that it can generate speech conveying delicate and complicated emotional states. The system is composed of an emotion analysis module that extracts a controlling emotion-intensity vector and a speech synthesis module responsible for mapping text characters to the speech waveform. The proposed continuous-valued "perception vector" is a data-driven approach to controlling the model to synthesize speech with different emotion intensities. Compared with a system that uses a one-hot vector to control emotion intensity, the model using the perception vector is able to learn high-level emotion information from low-level acoustic features. In terms of model controllability and flexibility, both the objective and subjective evaluations demonstrate that the perception vector outperforms the one-hot vector.
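The contrast between one-hot and continuous control can be made concrete with a small sketch (the emotion set and intensity values below are invented for illustration and are not the paper's perception vector): a one-hot code only selects an emotion, while a perception-vector-style conditioning input assigns each emotion a continuous intensity that the acoustic model consumes, so intensities can be scaled and interpolated.

```python
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]   # illustrative emotion set

def one_hot(emotion):
    """Discrete control: selects an emotion but cannot express its strength."""
    v = np.zeros(len(EMOTIONS))
    v[EMOTIONS.index(emotion)] = 1.0
    return v

def perception_vector(intensities):
    """Continuous control: one intensity in [0, 1] per emotion, e.g. predicted
    by a perception model from reference audio (values here are invented)."""
    return np.array([intensities.get(e, 0.0) for e in EMOTIONS])

mild = perception_vector({"neutral": 0.7, "happy": 0.3})
strong = perception_vector({"neutral": 0.1, "happy": 0.9})
print("one-hot happy:", one_hot("happy"))
for a in (0.0, 0.5, 1.0):
    cond = (1 - a) * mild + a * strong   # conditioning input fed to the acoustic model
    print(f"alpha={a}: {cond}")
```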
Funding: Supported by the National Natural Science Foundation of China (61231002, 61273266, 51075068), the Doctoral Fund of the Ministry of Education of China (20110092130004), the Postdoctoral Fund of the Ministry of Education of China (2012M520973), and the Open Research Foundation of the Key Laboratory (B) of Underwater Acoustic Signal Processing of the Ministry of Education, Southeast University, under Grant UASP1202.
Abstract: To address the drawbacks of Support Vector Machine (SVM) parameter optimization, an Improved Shuffled Frog Leaping Algorithm (Im-SFLA) is proposed that improves the learning ability of practical speech emotion recognition. First, Simulated Annealing (SA), Immune Vaccination (IV), Gaussian mutation, and chaotic disturbance are introduced into the basic SFLA, which effectively balances search efficiency and population diversity. Second, Im-SFLA is applied to the optimization of the SVM parameters, yielding an Im-SFLA-SVM method. Third, the acoustic features of practical speech emotions, such as fidgetiness, are analyzed. The pitch frequency, short-term energy, formant frequencies, and chaotic characteristics are analyzed for the different emotion categories, and a 144-dimensional emotion feature vector is constructed for recognition and reduced to 4 dimensions by Linear Discriminant Analysis (LDA). Finally, the Im-SFLA-SVM method is tested on a practical speech emotion database, and the recognition results are compared with the SFLA-optimized SVM (SFLA-SVM), the Particle Swarm Optimization-optimized SVM (PSO-SVM), the basic SVM, a Gaussian Mixture Model (GMM), and a Back Propagation (BP) neural network. The experimental results show that the average recognition rate of the Im-SFLA-SVM method is 77.8%, an improvement of 1.7%, 2.7%, 3.4%, 4.7%, and 7.8%, respectively, over the other methods. The recognition of fidgetiness is significantly improved, verifying that Im-SFLA is an effective SVM parameter selection method and that the Im-SFLA-SVM method can significantly improve practical speech emotion recognition.
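The recognition backbone described here (144-dimensional acoustic features reduced to 4 dimensions by LDA, then classified by an SVM whose parameters are tuned by a search procedure) can be sketched with scikit-learn. The example below uses synthetic data and an ordinary grid search as a stand-in for the Im-SFLA optimizer, purely to make the pipeline concrete; it is not the authors' system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the 144-dimensional emotion feature vectors
# (pitch, energy, formant and chaotic features); 5 emotion classes.
X, y = make_classification(n_samples=600, n_features=144, n_informative=30,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# LDA reduces 144 dims to at most (n_classes - 1) = 4, as in the paper; the
# grid search over (C, gamma) stands in for the Im-SFLA parameter search.
pipe = Pipeline([("lda", LinearDiscriminantAnalysis(n_components=4)),
                 ("svm", SVC(kernel="rbf"))])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10, 100],
                           "svm__gamma": ["scale", 0.01, 0.1, 1.0]}, cv=5)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))
```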