Abstract
Speech emotion recognition (SER) is an important research problem in human-computer interaction systems. The representation and extraction of features are significant challenges in SER systems. Despite the promising results of recent studies, they generally do not leverage progressive fusion techniques for effective feature representation and increased receptive fields. To mitigate this problem, this article proposes DeepCNN, which fuses the spectral and temporal features of emotional speech by parallelising convolutional neural networks (CNNs) and a convolution-layer-based transformer. Two parallel CNNs extract spectral (2D-CNN) and temporal (1D-CNN) feature representations. A 2D-convolution-layer-based transformer module extracts spectro-temporal features and concatenates them with the features from the parallel CNNs. The learnt low-level concatenated features are then passed to a deep framework of convolutional blocks, which retrieves high-level feature representations and subsequently categorises the emotional states using an attention gated recurrent unit and a classification layer. This fusion technique yields a deeper hierarchical feature representation at a lower computational cost while simultaneously expanding the filter depth and reducing the feature map. The Berlin Database of Emotional Speech (EMO-BD) and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets are used in experiments to recognise distinct speech emotions. With efficient spectral and temporal feature representation, the proposed SER model achieves 94.2% accuracy on the EMO-BD dataset and 81.1% accuracy on the IEMOCAP dataset. The proposed SER system, DeepCNN, outperforms baseline SER systems in terms of emotion recognition accuracy on both datasets.
Funding information
Biotechnology and Biological Sciences Research Council, Grant/Award Number: RM32G0178B8
MRC, Grant/Award Number: MC_PC_17171
Royal Society, Grant/Award Number: RP202G0230
BHF, Grant/Award Number: AA/18/3/34220
Hope Foundation for Cancer Research, Grant/Award Number: RM60G0680
GCRF, Grant/Award Number: P202PF11
Sino-UK Industrial Fund, Grant/Award Number: RP202G0289
LIAS, Grant/Award Numbers: P202ED10, P202RE969
Data Science Enhancement Fund, Grant/Award Number: P202RE237
Fight for Sight, Grant/Award Number: 24NN201
Sino-UK Education Fund, Grant/Award Number: OP202006