Journal Articles
2 articles found
1. Multi-style Chord Music Generation Based on Artificial Neural Network
Authors: 郁进明, 陈壮, 海涵. Journal of Donghua University (English Edition), CAS, 2023, No. 4, pp. 428-437 (10 pages)
With the continuous development of deep learning and artificial neural networks (ANNs), algorithmic composition has gradually become a hot research field. To address the music-style problem in generating chord music, a multi-style chord music generation (MSCMG) network is proposed, building on a previous ANN for music creation. The network adds a music-style extraction module and a style extractor to the original architecture: the music-style extraction module divides the entire music representation into two parts, the music-style information M_style and the music content information M_content, while the style extractor removes the music-style information entangled in the music content information. This paper compares the similarity of music generated by different models and evaluates whether the model can learn composition rules from the database. Experiments show that the proposed model can generate music in the expected style; compared with the long short-term memory (LSTM) network, the MSCMG network achieves a certain improvement in the rendering of music styles.
Keywords: algorithmic composition; artificial neural network (ANN); multi-style chord music generation network
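The style/content disentanglement described in this abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only (the paper does not publish its implementation): the "style extractor" is approximated here as projecting the style direction out of the content vector.

```python
# Toy sketch of the MSCMG idea: split a music representation M into
# style (M_style) and content (M_content), then strip residual style
# information that leaked into the content vector.
# All names and vectors here are hypothetical, not from the paper.

def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def remove_style(content, style):
    """Project the style direction out of the content vector
    (a stand-in for the paper's style extractor)."""
    scale = dot(content, style) / dot(style, style)
    return [c - scale * s for c, s in zip(content, style)]

m_style = [1.0, 0.0]      # toy style direction
m_content = [0.5, 2.0]    # content vector with some style leakage
clean = remove_style(m_content, m_style)
print(clean)              # style component removed: [0.0, 2.0]
```

After the projection, the cleaned content vector is orthogonal to the style direction, which is the property a style extractor of this kind aims for.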
2. Style-conditioned music generation with Transformer-GANs
Authors: Weining WANG, Jiahui LI, Yifan LI, Xiaofen XING. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2024, No. 1, pp. 106-120 (15 pages)
Recently, various algorithms have been developed for generating appealing music. However, style control in the generation process has been somewhat overlooked. Music style refers to the representative and unique appearance presented by a musical work, and it is one of the most salient qualities of music. In this paper, we propose an innovative music generation algorithm capable of creating a complete musical composition from scratch in a specified target style. A style-conditioned linear Transformer and a style-conditioned patch discriminator are introduced in the model. The style-conditioned linear Transformer models musical instrument digital interface (MIDI) event sequences and emphasizes the role of style information, while the style-conditioned patch discriminator applies an adversarial learning mechanism with two innovative loss functions to enhance the modeling of music sequences. Moreover, we establish a discriminative metric for the first time, enabling evaluation of how consistent the generated music is with the target style. Both objective and subjective evaluations indicate that our method outperforms state-of-the-art methods on available public datasets.
Keywords: music generation; style-conditioned; Transformer; music emotion
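The style-conditioning idea in this abstract, conditioning a sequence model on a target style, can be sketched at the token level. This is a hypothetical illustration only; the token values, style names, and function names below are assumptions, not the paper's actual vocabulary or API.

```python
# Toy sketch of style conditioning for a MIDI event-sequence model:
# a dedicated style token is prepended so the sequence model observes
# the target style before (and implicitly at) every generation step.
# All token values and names are illustrative assumptions.

STYLE_TOKENS = {"jazz": 1000, "classical": 1001, "pop": 1002}

def condition_on_style(midi_events, style):
    """Prepend the style token to a MIDI event-token sequence."""
    return [STYLE_TOKENS[style]] + list(midi_events)

events = [60, 64, 67]  # toy MIDI note tokens (a C major triad)
print(condition_on_style(events, "jazz"))  # [1000, 60, 64, 67]
```

In the paper's full model the conditioning is richer (learned style embeddings inside a linear Transformer, plus an adversarial patch discriminator), but prepending a style token is the simplest form of the same mechanism.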