vits2_pytorch

Export to JIT script

Open OnceJune opened this issue 2 years ago • 4 comments

Hi, I tried to export the model to JIT script but got this error: reversed(Tensor 0) -> (Tensor 0): Expected a value of type 'Tensor' for argument '0' but instead found type 'torch.torch.nn.modules.container.ModuleList'. It seems JIT doesn't support reversed() on a ModuleList; how can this be solved?

OnceJune avatar Sep 08 '23 01:09 OnceJune
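(For context, the error comes from calling reversed() on an nn.ModuleList, which TorchScript does not support. Below is a minimal sketch of one common workaround: register the same flow modules a second time in reverse order and iterate that list instead. FlowStack is an illustrative stand-in, not the repo's actual module; the real fix would have to be applied to the repo's own flow classes.)

import torch
import torch.nn as nn

class FlowStack(nn.Module):
    # Illustrative stand-in for a flow stack; not the actual class from this repo.
    def __init__(self, num_flows: int = 4, channels: int = 8):
        super().__init__()
        flows = [nn.Linear(channels, channels) for _ in range(num_flows)]
        self.flows = nn.ModuleList(flows)
        # TorchScript cannot iterate reversed(self.flows), so register the same
        # module objects again in reverse order. Parameters are shared, so a
        # checkpoint that only knows about self.flows still loads (strict=False).
        self.flows_reversed = nn.ModuleList(flows[::-1])

    def forward(self, x: torch.Tensor, reverse: bool = False) -> torch.Tensor:
        if not reverse:
            for flow in self.flows:          # plain ModuleList iteration is scriptable
                x = flow(x)
        else:
            for flow in self.flows_reversed:
                x = flow(x)
        return x

scripted = torch.jit.script(FlowStack())
print(scripted(torch.randn(2, 8), reverse=True).shape)

Because both ModuleLists hold the same module objects, the parameters are shared, so an existing checkpoint that only contains the flows.* keys can still be loaded with strict=False.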

I also need the model; you must democratize AI, as not everybody has money to spend on GPUs. I can't train it, so please release a free SOTA model so that we can use TTS industrially.

spacewalkingninja avatar Apr 18 '23 16:04 spacewalkingninja

To be fair, even if they do release it, I don't think it can be run on anything less than a 4090. And if you have a 4090, you can train it yourself.

nivibilla avatar Apr 18 '23 17:04 nivibilla

Then again, we could try using bitsandbytes

nivibilla avatar Apr 18 '23 20:04 nivibilla

I also need the model; you must democratize AI, as not everybody has money to spend on GPUs. I can't train it, so please release a free SOTA model so that we can use TTS industrially.

They 'must' do nothing honestly. Rather be thankful that this is implemented and released under an open source license already.

Also there is a model that has been shared here: https://github.com/lifeiteng/vall-e/issues/58#issuecomment-1483700593

RuntimeRacer avatar Apr 26 '23 01:04 RuntimeRacer

True. But still, a full pretrained model would be nice.

nivibilla avatar Apr 26 '23 06:04 nivibilla

I can try to train it. But I have some queries:

  • How much memory and time can it take to train the model?
  • Can I mix languages such as French, Spanish, and English from Common Voice?

RahulBhalley avatar Apr 30 '23 09:04 RahulBhalley

@RahulBhalley The author of the repo said he trained it on a single GPU (so something like a 3090?). In terms of time, I'm not sure, but #58 said he trained it for 4 days on 8 A100s. And the dataset for that was LibriTTS (600 hours), whereas the original VALL-E was trained on 60,000 hours. But I'm not sure how @lifeiteng managed to reproduce the paper's results so quickly using only a single GPU. @lifeiteng, could you let us know how long it took?

In terms of mixing languages, I don't see why not, as long as the training data is processed into phonemes. But this may affect performance. I would suggest training it in English first so that you can reproduce the original results, then maybe fine-tune it for other languages using LoRA?

nivibilla avatar Apr 30 '23 10:04 nivibilla
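(As a rough illustration of the phoneme preprocessing mentioned above: the snippet below uses the phonemizer package with the espeak backend to map text in several languages onto one shared phone inventory. The language codes, options, and sample sentences are assumptions, not the repo's actual pipeline.)

from phonemizer import phonemize

samples = [
    ("en-us", "The north wind and the sun were disputing."),
    ("fr-fr", "La bise et le soleil se disputaient."),
    ("es", "El viento del norte y el sol discutian."),
]

for lang, text in samples:
    phones = phonemize(
        text,
        language=lang,             # espeak voice code for each language
        backend="espeak",          # requires espeak-ng installed on the system
        strip=True,
        preserve_punctuation=True,
    )
    print(lang, phones)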

Thanks for the suggestions!

@RahulBhalley The author of the repo said he trained it on a single GPU (so something like a 3090?). In terms of time, I'm not sure, but https://github.com/lifeiteng/vall-e/issues/58 said he trained it for 4 days on 8 A100s. But I'm not sure how @lifeiteng managed to reproduce the paper's results so quickly using only a single GPU. @lifeiteng, could you let us know how long it took?

Maybe I'll try a 4090 or something. As for the A100, 80 GB of VRAM seems like overkill; I'd rather try an A100 40GB or a 4090 (whichever turns out to be faster). It would be very helpful to know @lifeiteng's experience with training on a single GPU.

And the dataset for that was LibriTTS (600 hours), whereas the original VALL-E was trained on 60,000 hours.

Common Voice has 3,209 hours (2,429 hours validated) and 86,942 speakers! Although the hours are far fewer than LibriLight, the diversity of voices is 12.4x greater! I think it'll be able to do better unseen-voice cloning.

In terms of mixing languages, I don't see why not, as long as the training data is processed into phonemes. But this may affect performance. I would suggest training it in English first so that you can reproduce the original results, then maybe fine-tune it for other languages using LoRA?

Sure. I don't know much about LoRA yet (I got to know about it while DreamBooth-ing earlier). I still have to read the paper.

Then again, we could try using bitsandbytes

This is gonna speed up the training by a lot! But looking at the re-implementation of Adam as ScaledAdam (https://github.com/lifeiteng/vall-e/blob/27c0667e0c3a18fd9074863a88a88094c6edbb2d/valle/modules/optim.py#L129), I'm not sure how to use bnb.optim.AdamW8bit instead.

RahulBhalley avatar Apr 30 '23 12:04 RahulBhalley

@RahulBhalley using bnb is super easy:

import bitsandbytes as bnb
optim_g = bnb.optim.AdamW(...)

You can use it as a drop-in replacement.

nivibilla avatar Apr 30 '23 12:04 nivibilla
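(A slightly fuller, hedged sketch of the drop-in swap described above; net_g and the hyperparameter values are hypothetical placeholders mirroring a typical VITS-style training script, not this repo's exact code.)

import bitsandbytes as bnb
import torch.nn as nn

net_g = nn.Linear(192, 192)   # stand-in for the generator network

# 8-bit AdamW as a drop-in replacement for torch.optim.AdamW; optimizer
# states are kept in 8 bit, which is where the VRAM saving comes from.
optim_g = bnb.optim.AdamW8bit(
    net_g.parameters(),
    lr=2e-4,
    betas=(0.8, 0.99),
    eps=1e-9,
)
# The rest of the loop is unchanged: optim_g.zero_grad(), loss.backward(),
# and optim_g.step() behave exactly like the stock AdamW calls.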

I used it for fine-tuning VITS and it saved me almost 3 GB of VRAM

https://github.com/nivibilla/efficient-vits-finetuning

nivibilla avatar Apr 30 '23 12:04 nivibilla

Look at this statement.

https://github.com/lifeiteng/vall-e/blob/27c0667e0c3a18fd9074863a88a88094c6edbb2d/valle/bin/trainer.py#L148

It uses ScaledAdam (a custom implementation from scratch).

https://github.com/lifeiteng/vall-e/blob/27c0667e0c3a18fd9074863a88a88094c6edbb2d/valle/modules/optim.py#L129

But ScaledAdam doesn't exist in bitsandbytes. I don't know whether using AdamW from bitsandbytes will make VALL-E converge worse.

I think I'll go with AdamW from bitsandbytes.

RahulBhalley avatar Apr 30 '23 12:04 RahulBhalley

I see. I mean, see how much VRAM you save. If it's only something like 3 GB, is it really worth it? The point of using 8-bit optimisers is mainly for fine-tuning so we can fit a bigger batch size. If it's not causing OOM, maybe bitsandbytes isn't needed.

nivibilla avatar Apr 30 '23 12:04 nivibilla

Have you had a look at the other VALL-E implementation? It uses DeepSpeed.

nivibilla avatar Apr 30 '23 12:04 nivibilla

Btw the original paper used AdamW

nivibilla avatar Apr 30 '23 12:04 nivibilla

Also, for multiple languages, VALL-E X exists but has no implementation. And NaturalSpeech 2 seems very promising, but an implementation by lucidrains is on the way.

nivibilla avatar Apr 30 '23 12:04 nivibilla

I see. I mean, see how much VRAM you save. If it's only something like 3 GB, is it really worth it? The point of using 8-bit optimisers is mainly for fine-tuning so we can fit a bigger batch size. If it's not causing OOM, maybe bitsandbytes isn't needed.

Okay, I thought it would speed up the training. 🤔 I should put the model in fp16 instead.
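(If fp16 is the route taken, a generic mixed-precision loop with torch.cuda.amp might look like the sketch below; the tiny model, data, and hyperparameters are stand-ins, not the VALL-E trainer. Note that the bfloat16 path mentioned later in this thread does not need a GradScaler.)

import torch
import torch.nn as nn

model = nn.Linear(128, 128).cuda()            # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()          # only needed for fp16, not bf16

for step in range(10):
    x = torch.randn(8, 128, device="cuda")
    target = torch.randn(8, 128, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast(dtype=torch.float16):
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()             # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)                    # unscales gradients, then steps
    scaler.update()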

Have you had a look at the other VALL-E implementation? It uses DeepSpeed.

I don't mind the VRAM. Just wanted to speed up training.

Btw the original paper used AdamW

Cool. Will try that.

Also, for multiple languages, VALL-E X exists but has no implementation. And NaturalSpeech 2 seems very promising, but an implementation by lucidrains is on the way.

Wow! The samples are incredible. Reminds me of NANSY++, where Jay-Z raps the lyrics of a Nas song.

RahulBhalley avatar Apr 30 '23 12:04 RahulBhalley

Yeah, NaturalSpeech 2 is amazing. I'm keeping a close eye on the implementation by lucidrains. It's being sponsored by Stability, so if we're lucky he may provide a pretrained model.

nivibilla avatar Apr 30 '23 12:04 nivibilla

Btw, if you want some more insight, there is a long thread here about mrq training VALL-E: https://git.ecker.tech/mrq/ai-voice-cloning/issues/152

Apparently on a 4070 ti lol

nivibilla avatar Apr 30 '23 13:04 nivibilla

Haha, okay.

RahulBhalley avatar Apr 30 '23 13:04 RahulBhalley

In case anyone is interested in trying CommonVoice as well, I just created a PR for a CV dataset preparation script I wrote: https://github.com/lifeiteng/vall-e/pull/111

RuntimeRacer avatar May 01 '23 17:05 RuntimeRacer

This is nice, @RuntimeRacer. Do you know what the differences are from the paper?

nivibilla avatar May 01 '23 18:05 nivibilla

@nivibilla As far as I am aware, they only trained the model with LibriLight in the paper, which consists of 60k hours of pure English speech. I didn't do a breakdown of hours per language for the languages I downloaded, but you can check it here; it uses CommonVoice 13: https://commonvoice.mozilla.org/en/datasets

I was more curious to see whether the model is actually capable of learning speech-related dialects and applying them across languages, for example by using a Japanese speaker + text as a prompt but letting it generate audio for an English sentence. I found https://github.com/serp-ai/bark-with-voice-clone capable of this, but that model proved to be not very robust at this task (or at cloning voices from arbitrary samples in general).

RuntimeRacer avatar May 01 '23 18:05 RuntimeRacer

@RuntimeRacer I see. Yeah, I would try to train it myself if I can. I'm thinking of maybe adapting the code to DeepSpeed/Accelerate so that I can do NVMe offloading. It will take a painstakingly long time, but at least I can train. It will converge very slowly, though.

nivibilla avatar May 01 '23 18:05 nivibilla
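(For the NVMe-offloading idea, an illustrative DeepSpeed ZeRO-3 configuration might look like the sketch below; the paths, batch size, and stand-in model are assumptions, and the real trainer would need its loop adapted to the returned engine.)

import deepspeed
import torch.nn as nn

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "nvme", "nvme_path": "/local_nvme"},
        "offload_param": {"device": "nvme", "nvme_path": "/local_nvme"},
    },
}

model = nn.Linear(128, 128)   # stand-in for the VALL-E model
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
# The training loop would then call engine(batch), engine.backward(loss),
# and engine.step() instead of the plain PyTorch equivalents.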

@nivibilla Yes, having Accelerate for this repo would be awesome; I was considering looking into that myself, but since I am pretty busy these days I'm currently rather hoping that folks here will be able to fix the multi-GPU issues faster than the training code could be migrated 😅 At least it seems someone is aware now after I highlighted that there's still an issue: https://github.com/lifeiteng/vall-e/issues/86

RuntimeRacer avatar May 01 '23 20:05 RuntimeRacer

TensorBoard of my CV training so far (it didn't even iterate through the first epoch yet): [screenshot: Tensorboard CV]

RuntimeRacer avatar May 01 '23 20:05 RuntimeRacer

@RuntimeRacer those are some interesting graphs. I wonder why it converges so quickly.

nivibilla avatar May 01 '23 20:05 nivibilla

Train loss going down while valid loss goes up indicates overfitting, but surely the dataset is not that small.

nivibilla avatar May 01 '23 20:05 nivibilla

To be fair, it's at almost 300k steps. Did you try inference?

nivibilla avatar May 01 '23 20:05 nivibilla

That's the command line I used for training:

python3 bin/trainer.py \
  --max-duration 60 --filter-min-duration 0.5 --filter-max-duration 14 \
  --train-stage 1 --num-buckets 6 --dtype bfloat16 \
  --save-every-n 5000 --valid-interval 5000 \
  --model-name valle --share-embedding true --norm-first true --add-prenet false \
  --decoder-dim 1024 --nhead 16 --num-decoder-layers 12 --prefix-mode 1 \
  --base-lr 0.05 --warmup-steps 200 --average-period 0 \
  --num-epochs 200 --start-epoch 1 --start-batch 160000 --accumulate-grad-steps 4 \
  --world-size 1 --exp-dir exp/valle

Regarding overfitting, I had to change --max-duration from 80 to 60 at 160k steps because I hit an OOM error with the dataset loader. Not sure if that's the cause here, but I have also seen 'movements' in the loss bias with changed dataset sizes and world sizes on other models I trained previously, so that might explain the upward shift in the loss mean at 160k.

Regarding the valid-loss jumps, I assume the model still needs to generalize on new tokens or speaker characteristics, because it hasn't looped through a whole epoch yet. But that's just blind guessing from a dev with no theoretical background in data science.

RuntimeRacer avatar May 01 '23 20:05 RuntimeRacer

To be fair, it's at almost 300k steps. Did you try inference?

No, my understanding is that I need to train the AR model until I reach a definite minimum in valid loss, and then train the NAR model based on that checkpoint. But I can try ofc and see whether it gives us anything except static.

RuntimeRacer avatar May 01 '23 20:05 RuntimeRacer