Transformer-like model with linear attention for speech emotion recognition (cited 3 times)
Authors: Du Jing, Tang Manting, Zhao Li. Journal of Southeast University (English Edition), EI, CAS, 2021, No. 2, pp. 164-170.
Abstract: Because of the excellent performance of Transformer in sequence learning tasks, such as natural language processing, an improved Transformer-like model is proposed that is suitable for speech emotion recognition tasks. To alleviate the prohibitive time consumption and memory footprint caused by the softmax inside the multi-head attention unit in Transformer, a new linear self-attention algorithm is proposed. The original exponential function is replaced by a Taylor series expansion formula. On the basis of the associative property of matrix products, the time and space complexity of the softmax operation with respect to the input length is reduced from O(N^2) to O(N), where N is the sequence length. Experimental results on emotional corpora in two languages show that the proposed linear attention algorithm achieves performance similar to the original scaled dot-product attention, while the training time and memory cost are reduced by half. Furthermore, the improved model obtains more robust performance on speech emotion recognition compared with the original Transformer.
Keywords: Transformer, attention mechanism, speech emotion recognition, fast softmax
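
The complexity reduction the abstract describes can be sketched in a few lines of NumPy. The abstract does not state the expansion order or the normalization the authors use, so the first-order expansion exp(q·k) ≈ 1 + q·k, the unit-norm feature map, and the function name below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    # Q: (N, d) queries, K: (N, d) keys, V: (N, d_v) values.
    # The softmax weight exp(q_i . k_j) is approximated by its first-order
    # Taylor expansion 1 + q_i . k_j, so the attention output becomes
    #   out_i = (sum_j v_j + q_i @ sum_j k_j v_j^T) / (N + q_i @ sum_j k_j).
    # Both sums over j are computed once and reused by every query, giving
    # O(N) time and memory instead of the O(N^2) cost of materializing the
    # explicit N x N attention matrix.

    # l2-normalize q and k so q . k lies in [-1, 1] and 1 + q . k >= 0,
    # keeping the approximated weights non-negative (an assumed choice).
    Q = Q / (np.linalg.norm(Q, axis=-1, keepdims=True) + eps)
    K = K / (np.linalg.norm(K, axis=-1, keepdims=True) + eps)

    N = K.shape[0]
    kv = K.T @ V            # (d, d_v): sum_j k_j v_j^T, shared by all queries
    k_sum = K.sum(axis=0)   # (d,):     sum_j k_j

    numerator = V.sum(axis=0) + Q @ kv   # (N, d_v)
    denominator = N + Q @ k_sum          # (N,)
    return numerator / (denominator[:, None] + eps)


# Toy usage on random features standing in for frame-level acoustic embeddings.
rng = np.random.default_rng(0)
N, d = 400, 64
Q, K, V = rng.standard_normal((3, N, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (400, 64)
```

The key step is the associativity mentioned in the abstract: computing K.T @ V first produces a fixed-size (d, d_v) matrix at O(N·d·d_v) cost, so the output for all N queries is obtained without ever forming the N x N score matrix.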