StableCascade
Running LoRA in 24 GB VRAM, but 30 seconds for a single image
Hi, thanks for the cool models, first of all.
I just want to ask about the VRAM spent during generation.
I tested with an A10 GPU with 24 GB of VRAM,
but when I checked the memory, it only uses about 20 GB of VRAM,
and the running time is quite slow for me. Do I need to run stage C and stage B every time?
In the example, generating 4 images takes only 26 seconds.
Should I set some flag like --highvram or something? Or which GPU should I use to generate as fast as the example?
The GDF sampler is very slow.
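For reference, my understanding is that both stages have to run every time: stage C (the prior) turns the prompt into image embeddings, and stage B (the decoder, together with the stage A VQGAN) turns those embeddings into pixels. This is roughly the two-stage flow I mean, sketched here with the diffusers port in bfloat16 rather than the repo's own notebook; the model ids, the bf16 variant flag, and the step counts are assumptions/illustrative, so adjust them to your setup.

```python
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

device = "cuda"

# Stage C (prior): turns the prompt into image embeddings.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", variant="bf16", torch_dtype=torch.bfloat16
).to(device)

# Stage B (decoder, bundled with the stage A VQGAN): turns embeddings into pixels.
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", variant="bf16", torch_dtype=torch.bfloat16
).to(device)

prompt = "a photo of a corgi wearing a space suit"  # illustrative prompt

prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=4.0,
    num_inference_steps=20,  # illustrative; fewer steps = faster stage C
    num_images_per_prompt=1,
)

images = decoder(
    image_embeddings=prior_output.image_embeddings.to(torch.bfloat16),
    prompt=prompt,
    guidance_scale=0.0,
    num_inference_steps=10,  # illustrative; stage B needs far fewer steps than stage C
    output_type="pil",
).images

images[0].save("output.png")
```

Running both pipelines in bfloat16 should fit comfortably in 24 GB and is usually faster than float32; most of the time goes into stage C, so its step count is the main knob.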
How did you get it running in the first place? I was trying to train a LoRA following the examples (after setting the SLURM env variables all to 1 and fixing the "-" file error for webdataset), but the code gets stuck at the progress bar after loading the models and images... I am using an A100, and I can see around 18 GiB of RAM used with 0% GPU usage... Are you following https://github.com/Stability-AI/StableCascade/blob/master/train/readme.md as well?
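For concreteness, by "setting the SLURM env variables" I mean something like the sketch below. Which of these variables the StableCascade trainer actually reads is an assumption on my part, and the rank-style ones normally want 0 rather than 1 for a single process; if they end up at 1, distributed init can hang or error out waiting for a rank 0 that never shows up, which might be related to the stuck progress bar.

```python
# Rough sketch: fake a single-node, single-process SLURM job before launching
# the LoRA training config. Which of these the trainer reads is an assumption;
# the values shown are the usual ones for one process (world size 1, rank 0).
import os

os.environ.update({
    "SLURM_NNODES": "1",   # number of nodes
    "SLURM_NTASKS": "1",   # total number of processes (world size)
    "SLURM_NODEID": "0",   # index of this node
    "SLURM_PROCID": "0",   # global rank of this process
    "SLURM_LOCALID": "0",  # local rank on this node
})
```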
I didn't start training yet, I just checked the output of the example :)
@SimonAndMilky +1, have you solved this problem?
I didn't start training yet, I just checked the output of the example :)
Hi, I use the code (https://github.com/Stability-AI/StableCascade/blob/master/inference/lora.ipynb) to generate images with LoRA, but the following error is reported.
I understand that this is caused by the model not being downloaded correctly. The command I used to download the models is as follows. Is there any problem with this?
Download models: bash download_models.sh essential small-small bfloat16
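In case it helps to narrow down which file is actually missing, a quick listing of what got downloaded can be compared against what inference/lora.ipynb tries to load. The models/ path and the .safetensors suffix below are assumptions about how download_models.sh lays things out, so adjust as needed.

```python
# Hypothetical sanity check: list the checkpoints that were actually downloaded.
# The "models" directory and the .safetensors extension are assumptions.
from pathlib import Path

models_dir = Path("models")
for ckpt in sorted(models_dir.glob("**/*.safetensors")):
    print(f"{ckpt.relative_to(models_dir)}: {ckpt.stat().st_size / 1e9:.2f} GB")
```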
@wen020 Hi, I just ran the download for all models, not only the essential ones.