stable-diffusion
Please default to CPU-only PyTorch if no suitable GPU is detected.
PyTorch is defaulting to NVIDIA GPU, but it would be good to fall back to CPU-only if no suitable GPU is found.
Something like this perhaps? https://github.com/ModeratePrawn/stable-diffusion-cpu/pull/1
This would provide a better out-of-the-box setup experience for users with unsupported GPUs (e.g. AMD) until those versions are implemented, and for users like me who have no GPU but want to run locally.
See: #56
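The fallback being requested can be sketched in a couple of lines (a minimal sketch, assuming plain PyTorch; the repo's scripts would then pass `device` to `.to(...)` instead of hard-coding "cuda"):

```python
import torch

# Use CUDA only when PyTorch actually detects a usable GPU;
# otherwise fall back to CPU so the scripts still run.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")
```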
@HenkPoley Thanks!
Hi, I am pretty new, but did you figure out how to get it to run CPU-only?
right here https://huggingface.co/spaces/stabilityai/stable-diffusion
Hi, yeah, that's a really great option, albeit a limited one: you can't control the size of the images, and you also can't use any adult terms because it is censored.
http://beta.dreamstudio.ai/
And that one is paid, unless you want to keep switching IPs and creating accounts, and it still censors your prompts.
With diffusers it's quite easy to run it on CPU only:
```python
# !pip install diffusers
from diffusers import StableDiffusionPipeline

# Without an explicit .to("cuda"), the pipeline loads and runs on CPU
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# remove the VAE encoder, as it's not needed for text-to-image
del pipe.vae.encoder

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt)["sample"][0]
```
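Note that the pipeline returns the result in memory as a PIL image, so nothing is written to disk unless you save it yourself. A minimal sketch (the filename is arbitrary, and `Image.new` here just stands in for the generated image):

```python
from PIL import Image

# Stand-in for the image returned by the pipeline; with the snippet
# above you would use the `image` from pipe(prompt) directly.
image = Image.new("RGB", (512, 512))

# Nothing is saved automatically; write the file explicitly.
image.save("astronaut_horse.png")
```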
Wrong, just use temp mail. When you run out, log out and sign up with a temp-mail address. There is no IP tracker; NovelAI has IP trackers.
Where do I put this? @patrickvonplaten
Seconded! Where does this go?
Damn... so slow with CPU. Is it safe? It consumes well over 100% of CPU. I have a quad-core Intel i5; hope that means 25% usage per core))
```python
del pipe.vae.encoder
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt)["sample"][0]
```

```
8%|██████▌ | 4/50 [02:09<22:20, 29.13s/it]
```
And I got this message:

```
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
```
Can I install it without exiting the `>>>` interpreter?
And where will the generated image be saved?
After the process finished, an error appeared:
Traceback (most recent call last):
File "
What should I do to fix it?