Carlos Mocholí

Results: 428 comments by Carlos Mocholí

Yes, the speed degradation is expected: you are trading speed for lower memory requirements.

Hi. Can you add the error you got?

You can pass `--precision 16-true` or `--precision 32-true` instead
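For reference, a minimal example invocation. The script name and checkpoint directory below are placeholders; substitute whatever you are actually running:

```sh
# Hypothetical example: pass the precision flag to the script you are running.
# Script name, checkpoint directory, and prompt are placeholders.
python generate.py \
  --checkpoint_dir checkpoints/tiiuae/falcon-7b \
  --precision 16-true \
  --prompt "Hello, my name is"
```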

Do you get errors with other checkpoints? If you have enough system RAM, you could try running one step on CPU. Even though it's very slow, it usually gives a...
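If the script selects the device automatically, one minimal way to do that CPU run is to hide the GPUs from PyTorch. The script and checkpoint paths below are placeholders:

```sh
# Assumption: the script picks the accelerator automatically ("auto"), so
# hiding the GPUs makes it fall back to CPU. Run a single step only, since
# this will be very slow.
CUDA_VISIBLE_DEVICES="" python <the_script_you_are_running>.py --checkpoint_dir <checkpoint_dir>
```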

What error are you getting? LoRA support for Falcon is landing with #141. You can install that branch to give it a try.
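For example, one way to check out the PR branch locally before it is merged:

```sh
# Fetch PR #141 into a local branch and switch to it
# ("falcon-lora" is just a local branch name, pick any).
git fetch origin pull/141/head:falcon-lora
git checkout falcon-lora
pip install -r requirements.txt  # in case the branch adds dependencies
```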

See my reply here: https://github.com/Lightning-AI/lit-parrot/issues/140#issuecomment-1590337763. TL;DR: use `--precision bf16-mixed`.

Closing, see https://github.com/Lightning-AI/lit-gpt/issues/159#issuecomment-1601122245 for more context on the memory usage

I didn't know about these extended Pythia models. Does the generation look fine without quantization?

Do you know if memory was slowly increasing with the iteration count, or was there just one spike that pushed you over the limit? I just merged #143, which should...
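If you get a chance to rerun it, something like this can show whether usage creeps up across iterations or jumps all at once:

```sh
# Sample GPU memory usage once per second while the job runs;
# a slow ramp vs. a single spike points at different causes.
nvidia-smi --query-gpu=timestamp,memory.used --format=csv -l 1
```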

I just merged some improvements to reduce the peak memory usage. Please pull the latest changes. I'll also be adding a guide for dealing with OOMs in #182. Hope this...
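To pick up the changes in an existing clone (assuming you installed from source and are tracking the default branch):

```sh
# Update the source checkout and refresh dependencies, in case they changed.
git pull origin main
pip install -r requirements.txt --upgrade
```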