lsaa

Results: 8 comments by lsaa

I'm also experiencing this. Done without changing the model or adding any embeds/hypernets/LoRAs to the prompt before first generation: total 17279676K. Generations: 1: 21838172K, 2: 23507288K, 3: 23553092K, 4: 23647104K...
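Figures like these (reported in kB) can be sampled between generations with a small helper. A minimal sketch, assuming Linux and reading `VmRSS` from `/proc/self/status`; `run_generation()` is a hypothetical stand-in for a call into the webui pipeline:

```python
def rss_kb() -> int:
    """Return the current process's resident set size (VmRSS) in kB.

    Linux-only: parses /proc/self/status, where VmRSS is reported in kB.
    """
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found in /proc/self/status")

# Usage sketch: log memory after each generation to spot the jumps.
# baseline = rss_kb()
# for i in range(4):
#     run_generation()              # hypothetical webui call
#     print(f"Generation {i + 1}: {rss_kb()}K")
```

Logging a baseline before the first generation, as above, makes the initial jump visible separately from the slow creep afterwards.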

Updated and tried `--no-hashing`. It seems better but still runs out of memory. The behavior is slightly different this time: it stays at a constant amount of memory between generations...

OK, so I've been trying to pinpoint what causes the memory to jump, and I found a few things out. I'm on [ea9bd9f](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/ea9bd9fc7409109adcd61b897abc2c8881161256). [without-no-hashing.txt](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/10611930/without-no-hashing.txt) [with-no-hashing.txt](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/10611931/with-no-hashing.txt) Without `--no-hashing` it's better than it...

Running without `--medvram` fixed it for me as well. Edit: tested it a bit more. It seems very stable; however, I can only generate smaller pics due to not...

> how do you use your VAEs? I might be onto something

I put them in the VAE directory and I'm not noticing sudden spikes anymore. This could be a...

> I'm starting to think this issue might be related to the graphics driver / PyTorch / xformers / kernel; this is Linux and Nvidia, after all. (I'm on 525, latest)...

> I suspect the secret to locating the source of the bug lies in investigating what that XYZ plot script does differently from normal generations. Perhaps it bypasses some stage...

OK, so I'm also pretty confident it's an issue in the torch backend. I tried out this fix: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/6722 and it's running perfectly. Note for Arch users: gperftools is built...
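The linked discussion swaps the default allocator for tcmalloc (from gperftools) via `LD_PRELOAD`. A quick way to confirm the preload actually took effect is to check whether a tcmalloc library shows up in the process's memory maps. A minimal sketch, assuming Linux; the exact library filename varies by distro (e.g. `libtcmalloc_minimal.so.4`):

```python
def tcmalloc_loaded() -> bool:
    """Return True if any tcmalloc shared library is mapped into this process.

    Linux-only: scans /proc/self/maps for a path containing "tcmalloc".
    """
    with open("/proc/self/maps") as f:
        return any("tcmalloc" in line for line in f)

# Run this inside the webui's Python process (e.g. from a startup hook)
# after launching with LD_PRELOAD set; False means the preload didn't apply.
print("tcmalloc active:", tcmalloc_loaded())
```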