78Alpha

115 comments by 78Alpha

To run this on Windows you will need WSL2, and you will likely have to resort to Docker, as the normal installation process is highly likely to fail and require a full...

Doesn't seem too far off. I use bfloat16 and see 41% utilization at 7.7 GB / 8 GB VRAM. That's with a 3070 (non-Ti). EDIT: The time, however, seems...

> @78Alpha are you using the Mega model? The default for the pip package

EDIT: Checking back, it is the Mega version. Will also try non-mega... Non-mega went to about...

I had been using my own utility script for batch generation, [here](https://github.com/78Alpha/PersonalUtilities/blob/main/MindalleBatch/main.py). I have it set to `is_mega=True`.

It might also help to clear the memory: add manual garbage collection and VRAM clears. Interrupting a model training run leaves it holding 66% of VRAM, so pressing resume ends in CUDA...
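A minimal sketch of the kind of manual cleanup meant here, assuming PyTorch (`free_memory` is a hypothetical helper name; the CUDA call is skipped when PyTorch or a GPU is unavailable):

```python
import gc

def free_memory():
    """Force Python garbage collection, then release cached CUDA memory."""
    gc.collect()  # drop unreferenced tensors first so their VRAM can actually be freed
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # hand cached allocator blocks back to the driver
    except ImportError:
        pass  # PyTorch not installed; nothing GPU-side to clear

free_memory()
```

Calling something like this between an interrupted run and a resume is what prevents the stale 66% allocation from pushing the resume into an OOM.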

> Not only that, with DeepSpeed support it would be possible to reduce it to 8 GB by using system memory as well
>
> > Would be amazing if we...

> @78Alpha Extension available at https://github.com/d8ahazard/sd_dreambooth_extension Try to keep up, will ya ;)

I have been using it already. It only worked in CPU mode. WSL2 gives me the same OOM...

I tested this out myself: with DeepSpeed installed and everything else set to crunch memory usage down, it does NOT work on 8 GB of VRAM. Others are reporting the same...

I have tried this out myself, with an 8 GB GPU and 48 GB of RAM, and most others who have tried it have had no success. It always OOMs. Even on...

It should be `qmake && make`, or `qmake Lanshare.pro && make`. Otherwise it might just error.
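A sketch of the suggested build sequence, assuming the standard Qt workflow (qmake generates a Makefile from the `.pro` file, then make compiles; the `command -v` guard is mine, so the snippet is a no-op on machines without Qt tools):

```shell
# Run qmake against the project file, then compile; the '&&' stops
# the chain early if qmake itself errors out.
if command -v qmake >/dev/null; then   # only attempt when Qt tools are installed
    qmake Lanshare.pro && make
fi
```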