RahulBhalley

Results 122 comments of RahulBhalley

I have access to an RTX 4090. The only problem is inference in production environments: this repo's Tortoise inference speed is sadly very slow!

It's been a long time. Any progress on this?

I can try to train it, but I have some queries: - How much memory and time will it take to train the model? - Can I mix languages such...

Thanks for the suggestions! > @RahulBhalley The author of the repo said he trained it on a single gpu(so something like a 3090?). In terms of time I'm not sure....

Look at this statement. https://github.com/lifeiteng/vall-e/blob/27c0667e0c3a18fd9074863a88a88094c6edbb2d/valle/bin/trainer.py#L148 It uses `ScaledAdam` (a custom implementation written from scratch). https://github.com/lifeiteng/vall-e/blob/27c0667e0c3a18fd9074863a88a88094c6edbb2d/valle/modules/optim.py#L129 But `ScaledAdam` doesn't exist in `bitsandbytes`. I don't know if using AdamW from `bitsandbytes` will...

> I see. I mean see how much vram you save. If it's only something like 3gb. Is it really worth? The point of using 8bit optimisers is mainly for...

Why did GH close it? It should remain open until it's solved.

We might instead close it, since Apple has now released SD with Core ML.