fast-stable-diffusion
v2 problem, help plz
Every time I run v2 it always stops at this point: the system RAM shoots through the roof and it looks like this:
LatentDiffusion: Running in v-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
DiffusionWrapper has 865.91 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
^C
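For what it's worth, a rough back-of-the-envelope on why system RAM disappears while the model is still being assembled (my own estimate, not output from the notebook; it assumes the checkpoint is loaded in fp32 on the CPU before anything is moved to the GPU):

# Rough estimate of the CPU RAM touched while loading the v2 UNet in fp32.
params = 865.91e6                     # "DiffusionWrapper has 865.91 M params." from the log
weights_gib = params * 4 / 1024**3    # 4 bytes per fp32 parameter
print(f"UNet weights alone: ~{weights_gib:.1f} GiB")

# While the state dict loaded from the .ckpt is being copied into the freshly
# initialized module, both copies sit in RAM at once, so the transient peak is
# roughly double that, before the VAE/CLIP weights, any EMA copies, and Python
# overhead are added; together with the rest of the webui that can exhaust a
# standard Colab runtime.
print(f"transient peak (module + state dict): ~{2 * weights_gib:.1f} GiB")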
I hate saying I have this problem, too. But I do.
I am unable to use the UI now! I am NOT thankful for this!
Same here; before anyone asks, I am on Pro+.
same!
Yeah, unfortunately I'm getting something along these lines as well.
Free Colab.
Free colab doesn't offer enough RAM to run this model, but I'm sure a solution will be available soon
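If you want to check which tier you actually landed on, the runtime's RAM can be read from a notebook cell with psutil (preinstalled on Colab); the standard runtime reports roughly 12-13 GB, the High-RAM shape noticeably more:

import psutil

# Total and currently free system RAM of the Colab runtime, in GiB.
vm = psutil.virtual_memory()
print(f"total: {vm.total / 1024**3:.1f} GiB, available: {vm.available / 1024**3:.1f} GiB")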
Hi, I’m not running free. I have tried it on premium RAM with the same error, heads up.
Confirmed. I'm on a high-RAM option and I get the same message :(
I am wondering: v2 is installed into '/content/gdrive/MyDrive/sd/stablediffusion', while v1 is installed into '/content/gdrive/MyDrive/sd/stable-diffusion'. Is that intentional? Not sure if it's connected :) (probably not, haven't tried changing it, though)
I hate saying I have this problem, too. But I do.
I am unable to use the UI now! I am NOT thankful for this!
Just switch back to v1 and it should work normally.
Hi, I’m not running free. I have tried it on premium RAM with the same error, heads up.
@archimedesinstitute In Colab: Runtime -> Change runtime type -> Runtime shape -> High-RAM
Keep the GPU Class to standard to save compute units
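To confirm from inside the notebook that the change took and that the GPU class is still standard (a T4 is the standard class on Colab; an A100 or V100 would mean premium compute units are being spent), something like this works:

import subprocess

# Ask the driver which GPU is attached and how much VRAM it has.
gpu = subprocess.run(["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
                     capture_output=True, text=True)
print(gpu.stdout.strip())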
@ClemensLode the repo for v2 is different from the one for v1.5: "stable-diffusion" is for v1 and "stablediffusion" is for v2.
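If anyone wants to double-check which of the two repos actually ended up on their Drive, a quick sanity check (paths taken from the comments above):

import os

# v1.x and v2 checkouts live side by side under the sd folder on Drive.
for path in ("/content/gdrive/MyDrive/sd/stable-diffusion",   # v1.x
             "/content/gdrive/MyDrive/sd/stablediffusion"):   # v2
    print(path, "->", "exists" if os.path.isdir(path) else "missing")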
Hi! I can confirm I have tried this on the premium runtime. It is not a fix.
As well as high RAM, with and without premium.
@archimedesinstitute copy the error log you get when you use the High-RAM setting.
LatentDiffusion: Running in v-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
DiffusionWrapper has 865.91 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Loading weights [a2a802b2] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/Copy of wojtunia.ckpt
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 207, in
@outhipped if you're using a 1.5 model, you need to set it to 1.5 in the download model cell and rerun the requirements cell
Edit:
OK, setting it to High-RAM and deleting the 1.5 ckpt file did the trick for me and I was able to create some images. Awesome, thanks!
It feels... different, though; haven't been able to create any better-looking images yet :)
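In case it helps anyone doing the same cleanup, this is roughly how to list what is still sitting in the webui models folder before deleting a ckpt (folder path taken from the traceback above; adjust if yours differs):

import glob, os

models_dir = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion"
for ckpt in sorted(glob.glob(os.path.join(models_dir, "*.ckpt"))):
    size_gib = os.path.getsize(ckpt) / 1024**3
    print(f"{os.path.basename(ckpt)}  ({size_gib:.2f} GiB)")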
@ClemensLode the v2 is simply awful, check the subreddit to get the idea https://www.reddit.com/r/StableDiffusion/
Indeed! But it's good to keep up with development. Thanks for the effort!
Yeah, I’m not really stoked on it until there’s a clear path for fine-tuning the model.
Fine-tuning the model requires a lot of computing power; DreamBooth alone isn't gonna cut it.
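Rough numbers on why, as a back-of-the-envelope only (it assumes naive fp32 training of the 865.91 M-parameter UNet with plain Adam; real trainers cut this down with 8-bit optimizers, gradient checkpointing, and so on):

# Per-parameter cost of naive fp32 training with Adam:
#   weights (4 B) + gradients (4 B) + Adam first/second moments (8 B) = 16 B
params = 865.91e6
print(f"~{params * 16 / 1024**3:.1f} GiB of VRAM before activations are even counted")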
Hmmmm, good to know.
So how do we use 1.5? I just get this error.
if you're using a 1.5 model, you need to set it to 1.5 in the download model cell and rerun the requirements cell