Journal Articles
2 articles found
1. Review of Computerized Flat Knitting Machines at the 18th Shanghai International Textile Industry Exhibition (Cited by: 2)
Authors: An Hong (安虹), Song Guangli (宋广礼). 《针织工业》 (Knitting Industry), 2017, Issue 12, pp. 12-17 (6 pages)
This article introduces the manufacturers and equipment of the computerized flat knitting machines exhibited at the 18th Shanghai International Textile Industry Exhibition. It summarizes the technical characteristics of the exhibited machines: machine efficiency has improved along two directions, higher speed and shorter carriages; the homogenization of domestic equipment has begun to improve; the application fields of computerized flat knitting machines keep expanding; and the intelligence of process and pattern-design software has made breakthroughs. From a technical standpoint, several suggestions for further development are offered: equipment manufacturers should shift from price competition toward quality improvement and product innovation, further broaden product application fields, and further raise the level of intelligence.
Keywords: exhibition; textile machinery; computerized flat knitting machine; technical features; short carriage; computerized shoe-upper knitting machine; intelligence
2. Multi-head attention-based long short-term memory model for speech emotion recognition (Cited by: 1)
Authors: Zhao Yan, Zhao Li, Lu Cheng, Li Sunan, Tang Chuangao, Lian Hailun. Journal of Southeast University (English Edition), indexed in EI and CAS, 2022, Issue 2, pp. 103-109 (7 pages)
To fully make use of information from different representation subspaces, a multi-head attention-based long short-term memory (LSTM) model is proposed in this study for speech emotion recognition (SER). The proposed model uses frame-level features and takes the temporal information of emotion speech as the input of the LSTM layer. Here, a multi-head time-dimension attention (MHTA) layer was employed to linearly project the output of the LSTM layer into different subspaces for the reduced-dimension context vectors. To provide relative vital information from other dimensions, the output of MHTA, the output of feature-dimension attention, and the last time-step output of LSTM were utilized to form multiple context vectors as the input of the fully connected layer. To improve the performance of multiple vectors, feature-dimension attention was employed for the all-time output of the first LSTM layer. The proposed model was evaluated on the eNTERFACE and GEMEP corpora, respectively. The results indicate that the proposed model outperforms LSTM by 14.6% and 10.5% for eNTERFACE and GEMEP, respectively, proving the effectiveness of the proposed model in SER tasks.
Keywords: speech emotion recognition; long short-term memory (LSTM); multi-head attention mechanism; frame-level features; self-attention
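To make the architecture described in the abstract above more concrete, here is a minimal PyTorch sketch of that kind of model: stacked LSTM layers over frame-level features, multi-head self-attention over the time axis, a simple feature-dimension attention on the first LSTM layer's output, and a classifier over the concatenated context vectors. Layer sizes, the number of classes, the pooling of attention outputs, and the exact form of the feature-dimension attention are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a multi-head attention-based LSTM for SER.
# Dimensions, depth, and the feature-dimension attention form are assumed.
import torch
import torch.nn as nn


class MHTALSTM(nn.Module):
    def __init__(self, feat_dim=78, hidden_dim=128, num_heads=4, num_classes=6):
        super().__init__()
        # Two stacked LSTM layers over frame-level features (assumed depth).
        self.lstm1 = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.lstm2 = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Multi-head time-dimension attention (MHTA): multi-head
        # self-attention over the time axis of the second LSTM's output.
        self.mhta = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # Feature-dimension attention on the all-time output of the first LSTM:
        # here a simple learned per-channel weighting (an assumption).
        self.feat_attn = nn.Linear(hidden_dim, hidden_dim)
        # Fully connected classifier over the three concatenated context vectors.
        self.fc = nn.Linear(3 * hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, feat_dim) frame-level features.
        h1, _ = self.lstm1(x)                          # (B, T, H)
        h2, _ = self.lstm2(h1)                         # (B, T, H)

        # MHTA context vector: attend over time, then average-pool frames.
        attn_out, _ = self.mhta(h2, h2, h2)            # (B, T, H)
        c_time = attn_out.mean(dim=1)                  # (B, H)

        # Feature-dimension attention context vector from the first LSTM layer.
        w = torch.softmax(self.feat_attn(h1), dim=-1)  # (B, T, H)
        c_feat = (w * h1).mean(dim=1)                  # (B, H)

        # Last time-step output of the (second) LSTM layer.
        c_last = h2[:, -1, :]                          # (B, H)

        # Concatenate the three context vectors and classify.
        return self.fc(torch.cat([c_time, c_feat, c_last], dim=-1))


if __name__ == "__main__":
    model = MHTALSTM()
    logits = model(torch.randn(8, 300, 78))  # 8 utterances, 300 frames each
    print(logits.shape)                      # torch.Size([8, 6])
```

The three pooled vectors mirror the abstract's "multiple context vectors" fed to the fully connected layer; how the paper actually reduces the attention outputs to fixed-size vectors is not specified in the abstract, so mean pooling is used here as a placeholder.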