musicgen_trainer
simple trainer for musicgen/audiocraft
MusicGen Trainer
This is a trainer for the MusicGen model. It is based on the original training code by @chavinlau (see the original README further below).
Contributors
- @mkualquiera and @neverix: actually got it working
- elyxlz: help with masks
STATUS: MVP
Removing the gradient scaler, increasing the batch size, and training only on conditional samples makes training work.
TODO:
- [ ] Add notebook
- [ ] Add webdataset support
- [ ] Try larger models
- [ ] Add LoRA
- [ ] Make rolling generation customizable
Usage
Dataset Creation
Create a folder and place your audio and caption files in it. They must be in .wav and .txt format, respectively. You can omit the .txt files and train with empty text by setting the --no_label option to 1.

You can use .wav files longer than 30 seconds, in that case the model will be trained on random crops of the original .wav file.
In this example, segment_000.txt contains the caption "jazz music, jobim" for wav file segment_000.wav.
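For reference, a dataset folder might then look like this (filenames are only examples):

```
dataset/
├── segment_000.wav
├── segment_000.txt   <- "jazz music, jobim"
├── segment_001.wav
└── segment_001.txt
```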
Running the trainer
Run python3 run.py --dataset <PATH_TO_YOUR_DATASET>. Make sure to use the full path to the dataset, not a relative path.
Options
- dataset_path: String, path to your dataset with .wav and .txt pairs.
- model_id: String, MusicGen model to use. Can be small/medium/large. Default: small
- lr: Float, learning rate. Default: 0.00001 / 1e-5
- epochs: Integer, epoch count. Default: 100
- use_wandb: Integer, 1 to enable wandb, 0 to disable it. Default: 0 = Disabled
- save_step: Integer, number of steps between checkpoint saves. Default: None
- no_label: Integer, whether to read a dataset without .txt files. Default: 0 = Disabled
- tune_text: Integer, perform textual inversion instead of full training. Default: 0 = Disabled
- weight_decay: Float, the weight decay regularization coefficient. Default: 0.00001 / 1e-5
- grad_acc: Integer, number of steps to smooth gradients over. Default: 2
- warmup_steps: Integer, number of steps to slowly increase the learning rate over so the optimizer can compute statistics. Default: 16
- batch_size: Integer, batch size the model sees at once. Reduce to lower memory consumption. Default: 4
- use_cfg: Integer, whether to train with some labels randomly dropped out. Default: 0 = Disabled
You can set these options like this: python3 run.py --use_wandb=1.
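For example, a fuller run combining several of these options might look like this (the values are only illustrative; see the option list above for what each one does):

```
python3 run.py --dataset /home/ubuntu/dataset --epochs=100 --batch_size=4 --grad_acc=2 --save_step=100 --use_wandb=1
```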
Models
Once training finishes, the model (and checkpoints) will be available under the models folder in the directory you ran the trainer from.

To load them, simply run the following on your generation script:
model.lm.load_state_dict(torch.load('models/lm_final.pt'))
Where model is the MusicGen object and models/lm_final.pt is the path to your model (or checkpoint).
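A minimal sketch of a generation script that does this, assuming the standard audiocraft MusicGen API (the model size, prompt, and output name are just examples, and should match whatever you trained):

```python
import torch
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the same base model size you fine-tuned, then swap in the trained LM weights.
model = MusicGen.get_pretrained('small')
model.lm.load_state_dict(torch.load('models/lm_final.pt'))

# Generate as usual; the prompt here is only an example.
model.set_generation_params(duration=30)
wav = model.generate(['jazz music, jobim'])

# Write the first generated waveform to disk with loudness normalization.
audio_write('sample_0', wav[0].cpu(), model.sample_rate, strategy='loudness')
```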
Citations
@article{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
journal={arXiv preprint arXiv:2306.05284},
}
@mkualquiera (mkualquiera@discord) added batching, debugged the code and trained the first working model.
Special thanks to elyxlz (223864514326560768@discord) for helping @chavinlau with the masks.
@chavinlau wrote the original version of the training code. Original README:
MusicGen Trainer
This is a trainer for the MusicGen model. Currently it's very basic, but I'll add more features soon.
STATUS: BROKEN
Only works for overfitting; it breaks the model on anything else.
More information on the current training quality is in the Experiments section.
Usage
Dataset Creation
Create a folder and place your audio and caption files in it. They must be in WAV and TXT format, respectively.

Important: Split your audio into 35-second chunks (see the sketch at the end of this section). Only the first 30 seconds of each chunk will be processed. Audio cannot be shorter than 30 seconds.
In this example, segment_000.txt contains the caption "jazz music, jobim" for wav file segment_000.wav.
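A minimal chunking sketch, assuming torchaudio is installed (this helper is not part of the trainer, and the filenames are only examples):

```python
import torchaudio

def split_wav(path, out_prefix, chunk_seconds=35):
    """Split one long recording into consecutive 35-second .wav segments."""
    wav, sr = torchaudio.load(path)           # wav: (channels, samples)
    chunk = chunk_seconds * sr
    for i in range(wav.shape[1] // chunk):
        segment = wav[:, i * chunk:(i + 1) * chunk]
        torchaudio.save(f'{out_prefix}_{i:03d}.wav', segment, sr)

split_wav('full_song.wav', 'segment')  # writes segment_000.wav, segment_001.wav, ...
```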
Running the trainer
Run python3 run.py --dataset /home/ubuntu/dataset, replacing /home/ubuntu/dataset with the path to your dataset. Make sure to use the full path, not a relative path.
Options
- dataset_path: String, path to your dataset with WAV and TXT pairs.
- model_id: String, MusicGen model to use. Can be small/medium/large. Default: small
- lr: Float, learning rate. Default: 0.0001 / 1e-4
- epochs: Integer, epoch count. Default: 5
- use_wandb: Integer, 1 to enable wandb, 0 to disable it. Default: 0 = Disabled
- save_step: Integer, number of steps between checkpoint saves. Default: None
You can set these options like this: python3 run.py --use_wandb=1.
Models
Once training finishes, the model (and checkpoints) will be available under the models folder in the directory you ran the trainer from.

To load them, simply run the following on your generation script:
model.lm.load_state_dict(torch.load('models/lm_final.pt'))
Where model is the MusicGen object and models/lm_final.pt is the path to your model (or checkpoint).
Experiments
Electronic music (Moe Shop):
Encodec seems to struggle with electronic music. Even just Encoding->Decoding has many problems.
4:00 - 4:30 - Moe Shop - WONDER POP
Original: https://voca.ro/1jbsor6BAyLY
Encode -> Decode: https://voca.ro/1kF2yyGyRn0y
Overfit -> Generate -> Decode: https://voca.ro/1f6ru5ieejJY
Bossa Nova (Tom Jobim):
Softer and less aggressive melodies seem to play best with Encodec and MusicGen. One of these is bossa nova, which to me sounds great:
1:20 - 1:50 - Tom Jobim - Children's Games
Original: https://voca.ro/1dm9QpRqa5rj (last 5 seconds are ignored)
Encode -> Decode: https://voca.ro/19LpwVE44si7
Overfit -> Generate -> Decode: https://voca.ro/1hJGVdxsvBOG
Citations
@article{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
journal={arXiv preprint arXiv:2306.05284},
}
Special thanks to elyxlz (223864514326560768@discord) for helping me with the masks.