For human-machine communication to be as effective as human-to-human communication, research on speech emotion recognition is essential. Among the models and classifiers used to recognize emotions, neural networks appear promising due to their ability to learn and their diversity of configurations. Following the convolutional neural network, the capsule neural network (CapsNet), whose inputs and outputs are vectors rather than scalar quantities, allows the network to determine the part-whole relationships that are specific to an object. This paper performs speech emotion recognition based on CapsNet. The corpora for speech emotion recognition have been augmented by adding white noise and changing voices. The feature parameters at the input of the recognition system are mel spectrum images along with characteristics of the sound source, vocal tract, and prosody. For the German emotional corpus EMO-DB, the average accuracy score for four emotions (neutral, boredom, anger, and happiness) is 99.69%. For the Vietnamese emotional corpus BKEmo, this score is 94.23% for four emotions (neutral, sadness, anger, and happiness). The accuracy score is highest when combining all the above feature parameters, and it increases significantly when combining mel spectrum images with features directly related to the fundamental frequency.
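The white-noise augmentation mentioned above can be sketched as follows. This is not the authors' exact pipeline (the paper does not specify noise levels or implementation details); it is a minimal, commonly used approach that mixes zero-mean Gaussian noise into a waveform at a chosen signal-to-noise ratio. The function name `add_white_noise` and the SNR parameterization are illustrative assumptions.

```python
import numpy as np

def add_white_noise(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Mix zero-mean white Gaussian noise into `signal` at the given SNR in dB.

    Illustrative sketch: the noise variance is chosen so that the ratio of
    signal power to noise power equals 10**(snr_db / 10).
    """
    rng = np.random.default_rng(rng)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: augment one second of a 440 Hz tone at 16 kHz with noise at 20 dB SNR.
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
noisy = add_white_noise(tone, snr_db=20.0, rng=0)
```

Each augmented copy is typically added to the training set alongside the clean original, which increases the corpus size and encourages the model to be robust to recording noise.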