open-musiclm
Implementation of MusicLM, a text to music model published by Google Research, with a few modifications.
does your model generate music with words?
I am writing here because the discord invite in the README.md is invalid. I am not sure I am doing this "right". Using the dataset provided on Google Drive and...
```
(open-musiclm) G:\Learn\AmateurLearning\AI\Practice\open-musiclm-main>python ./scripts/train_clap_rvq.py --results_folder ./results/clap_rvq --model_config ./configs/model/musiclm_small.json --training_config ./configs/training/train_musiclm_fma.json
loading clap...
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias',...
```
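As an aside, the "Some weights of the model checkpoint ... were not used" message is a standard Hugging Face warning: roberta-base ships with a masked-LM head (`lm_head.*`), and those weights are simply dropped when the checkpoint is loaded as a plain `RobertaModel` encoder, as CLAP does. It is expected and harmless here. If you want a quieter log, a minimal sketch (assuming the standard `transformers` logger names) is:

```python
import logging

# transformers emits this warning through a logger named after the
# module that loads checkpoints; raising its level hides the message.
# (Assumption: the warning comes from transformers.modeling_utils,
# which is where from_pretrained weight-loading messages originate.)
logging.getLogger("transformers.modeling_utils").setLevel(logging.ERROR)
```

This only changes log verbosity; it does not affect which weights are loaded.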
I'm not familiar with the music domain, are there any open-source datasets available for use?
I just ran the infer file and got this error:
```
Traceback (most recent call last):
  File "/workspace/OPEN-MUSICLM/scripts/infer.py", line 66, in <module>
    musiclm = create_musiclm_from_config(
  File "/workspace/OPEN-MUSICLM/scripts/../open_musiclm/config.py", line 442, in create_musiclm_from_config
...
```
The README states that the CLAP model uses the LAION-Audio-630K dataset; however, in the repo I can only find a reference to the FMA (Free Music Archive) dataset. Is there...
It seems that in the ClapRVQTrainer code, you do not use any gradient backward pass. How should I understand this?
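For context on why no `loss.backward()` is needed there: residual vector quantizers are commonly trained with exponential-moving-average (EMA) codebook updates rather than gradient descent, so the codebooks are fitted purely from assignment statistics. The following is a minimal sketch of that idea (not the repo's actual code; all names here are illustrative), showing one EMA update for a single quantizer level:

```python
import torch

@torch.no_grad()  # no gradients: the codebook is updated by statistics, not backprop
def ema_codebook_update(codebook, cluster_size, embed_avg, x, decay=0.99):
    """One EMA update step for a single quantizer level (illustrative).

    codebook:     (K, D) current codebook vectors (updated in place)
    cluster_size: (K,)   running count of assignments per code
    embed_avg:    (K, D) running sum of embeddings assigned to each code
    x:            (N, D) batch of embeddings to quantize
    """
    # nearest codebook entry for each input vector
    dists = torch.cdist(x, codebook)                                  # (N, K)
    codes = dists.argmin(dim=-1)                                      # (N,)
    onehot = torch.nn.functional.one_hot(codes, codebook.shape[0]).float()

    # decay old statistics, mix in the new batch
    cluster_size.mul_(decay).add_(onehot.sum(0), alpha=1 - decay)
    embed_avg.mul_(decay).add_(onehot.t() @ x, alpha=1 - decay)

    # move each code toward the running mean of its assigned embeddings
    codebook.copy_(embed_avg / cluster_size.clamp(min=1e-5).unsqueeze(-1))
    return codes
```

Under this scheme a "training" loop only needs forward passes to collect assignments, which would explain a trainer with no backward call.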
I tried different prompts with your pretrained model. I felt that it could only generate melodies, but could not generate songs with lyrics.
do we need to prepare training data?