Abstract
This paper presents a music emotion classification method that combines the audio and lyric modalities and uses a deep belief network (DBN). For the classifier, the traditional classifier is replaced with a DBN, and an improved Late Fusion with Sub-task Merging (LFSM) scheme is used to complete the multi-modal fusion; the feasibility of the method is verified. Experimental results show that the method classifies music emotion well, outperforming approaches based on a single modality or on traditional classifiers.
Authors
ZHAO Yong-fei; WANG Yu; ZHOU Yi-kai; YUAN Yan (School of Computer and Information, Hohai University, Nanjing 211100, China)
Source
Information Technology (《信息技术》), 2019, Issue 2, pp. 102-106, 110 (6 pages)
Keywords
music emotion classification
vector space model
latent semantic analysis
multi-modal fusion
DBN