Abstract
A melody representation model based on relative pitch and relative duration was proposed. Because the representation is relative rather than absolute, the same melodic pattern maps to the same sequence under different performances, for example after transposition or a change of tempo. Based on this model, a music style classification algorithm was proposed that measures music style through the mutual information of melodies. The algorithm's time complexity is close to that of segmentation algorithms under Unigram and Bigram models, and it better supports the classification of pieces that mix several musical styles.
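The abstract does not give the model's exact encoding, so the following is only a minimal sketch of the idea, assuming a melody is given as a list of (MIDI pitch, duration in beats) pairs. The function names to_relative and bigram_mutual_information are hypothetical and not from the paper; they only illustrate how a relative-pitch/relative-duration representation makes transposed or tempo-scaled performances of the same melody identical, and how a mutual-information score could be computed over the resulting symbol sequence.

from collections import Counter
from math import log2

def to_relative(melody):
    """Convert absolute notes to (pitch interval, duration ratio) symbols,
    so transposed or tempo-scaled performances map to the same sequence."""
    rel = []
    for (p0, d0), (p1, d1) in zip(melody, melody[1:]):
        rel.append((p1 - p0, round(d1 / d0, 3)))
    return rel

def bigram_mutual_information(symbols):
    """Empirical mutual information of adjacent relative-note symbols;
    a rough stand-in for the melody mutual-information measure."""
    uni = Counter(symbols)
    bi = Counter(zip(symbols, symbols[1:]))
    n_uni, n_bi = len(symbols), max(len(symbols) - 1, 1)
    total = 0.0
    for (a, b), c in bi.items():
        p_ab = c / n_bi
        p_a, p_b = uni[a] / n_uni, uni[b] / n_uni
        total += p_ab * log2(p_ab / (p_a * p_b))
    return total

# The same motif transposed up a fifth and played at half tempo
# yields an identical relative sequence, hence an identical score.
motif      = [(60, 1.0), (62, 1.0), (64, 2.0), (62, 1.0)]
transposed = [(67, 2.0), (69, 2.0), (71, 4.0), (69, 2.0)]
assert to_relative(motif) == to_relative(transposed)
print(bigram_mutual_information(to_relative(motif)))

Using duration ratios rather than differences is what keeps the sketch invariant to a global tempo change, which matches the "relative time" idea described in the abstract.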
Source
《计算机应用》 (Journal of Computer Applications)
CSCD
Peking University Core Journals (北大核心)
2005, Issue 5, pp. 1116-1118 (3 pages)
Keywords
melody
mutual information
segmentation