EMOPIA
Emotional conditioned music generation using transformer-based model.
Hi, I read the paper related to this repo and was curious about the subjective metrics. Were the 4 songs per model used in the subjective metrics randomly chosen from 400...
Hi, when I used your training code, I found something I didn't understand during model forwarding. During training, the model first predicts the family token (y_type), and...
The baseline folder has many .py files; how do I run the baseline using your repo? I found that the baseline's implementation needs data/train and data/val. How can I get these and run...
Hi, I have some questions about the pretraining dataset (ailabs) and the pretrained models. 1. On the main page of this repo, I found a link that provided pretrained models....
Greetings, thank you for making this code public. I am currently trying to use the code to generate three different kinds of music (sad/depressive, joyful/happy, or neutral). After using...
Hello author, thank you very much for your work. In the "dataset analysis", how do you extract the note density from the MIDI files to generate the violin plot? Is...
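One plausible way to compute note density (this is a sketch of a common approach, not necessarily the authors' actual analysis pipeline): parse each MIDI clip's notes (e.g. with `miditoolkit` or `pretty_midi`) and divide the note count by the clip duration in seconds; the per-clip densities, grouped by emotion class, can then be passed to `matplotlib`'s `violinplot`. The density computation itself needs no MIDI library:

```python
def note_density(note_onsets, clip_seconds):
    """Notes per second: total note count divided by clip duration."""
    if clip_seconds <= 0:
        raise ValueError("clip duration must be positive")
    return len(note_onsets) / clip_seconds

# Hypothetical example: 24 note onsets over a 12-second clip
onsets = [i * 0.5 for i in range(24)]
print(note_density(onsets, 12.0))  # -> 2.0 notes per second
```

With one density value per clip collected into a list per emotion class, `matplotlib.pyplot.violinplot([class1_densities, class2_densities, ...])` would produce a violin diagram like the one in the paper's dataset analysis.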
Hi there, Thank you for your effort in putting together this dataset. The paper mentions 1087 clips but the midi folder contains only 1078/1071 files (in v2.2). Could you please...