stable-diffusion

Optimized Stable Diffusion modified to run on lower GPU VRAM

99 stable-diffusion issues, sorted by most recently updated

How can I load and use a VAE with this repo? (If it's not currently possible, can I request it as a feature?)
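
One approach worth sketching (an assumption about the checkpoint layout, not a documented feature of this repo) is to overwrite the loaded model's first-stage autoencoder with the weights from a standalone VAE file after the main checkpoint has been loaded:

```python
# Hedged sketch: swap in a standalone VAE after the main checkpoint is loaded.
# `model.first_stage_model` and the "state_dict" key are assumptions about how
# the LDM checkpoint and community VAE files are laid out; adjust to your files.
import torch

def load_external_vae(model, vae_path, device="cuda"):
    vae_ckpt = torch.load(vae_path, map_location="cpu")
    vae_sd = vae_ckpt.get("state_dict", vae_ckpt)  # many VAE files nest weights under "state_dict"
    missing, unexpected = model.first_stage_model.load_state_dict(vae_sd, strict=False)
    print(f"VAE swap: {len(missing)} missing / {len(unexpected)} unexpected keys")
    return model.to(device)
```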

I did not use Docker and just installed everything in a virtualenv, but I'm getting a "no module" error. Does anyone have any ideas? Global seed set to 27 Loading model from models/ldm/stable-diffusion-v1/model.ckpt Global...

* Implemented the vectorize_prompt() method on top of the script; vectorize_prompt() is by https://github.com/consciencia/stable-diffusion. * Minor styling corrections.

I am trying to train stable-diffusion models on my custom text2img dataset. I'm still struggling to fit training into the 24 GB of VRAM on my 3090; maybe this code is the solution. Example config YAML...
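
For the fitting-into-24 GB problem, one generic PyTorch tactic (a sketch with placeholder names, not this repo's actual training loop) is gradient accumulation, which trades wall-clock time for lower peak memory:

```python
# Hedged sketch: gradient accumulation keeps the per-step micro-batch small
# while preserving an effective batch size of micro_batch * accum_steps.
# `model`, `optimizer`, and `dataloader` are placeholders for the real training objects.
accum_steps = 8
optimizer.zero_grad()
for i, batch in enumerate(dataloader):
    loss = model.training_step(batch, i) / accum_steps  # placeholder loss call, scaled for accumulation
    loss.backward()
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```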

When I run the docker compose command `docker compose up --build`, I get this error: `Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]`. I can...

Hi, I'm playing around with this lib on Ubuntu 22.04.1, and when I try to generate images from the CLI using the scripts (I tried `txt2img_gradio` & `img2img_gradio`), all that...

I tried to use the img2img script and got: ``` RuntimeError: CUDA out of memory. Tried to allocate 1.75 GiB (GPU 0; 8.00 GiB total capacity; 5.09 GiB already...
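
For out-of-memory errors like this on an 8 GiB card, the usual generic mitigations (a sketch with placeholder names, not this repo's exact API) are inference without autograd, mixed precision, a batch size of 1, and clearing the CUDA cache between runs:

```python
# Hedged sketch: standard PyTorch memory-reduction tactics for inference on a small GPU.
# `model.sample(...)` is a placeholder for whatever sampling call the script actually makes.
import torch

torch.cuda.empty_cache()                       # drop cached allocations left over from earlier runs
with torch.no_grad(), torch.autocast("cuda"):  # no autograd buffers + reduced-precision activations
    samples = model.sample(batch_size=1)       # smaller batch (and resolution) lowers peak memory
```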

Got this issue with some samplers (lms, dpm2, etc.): ![samplerissue-lm](https://user-images.githubusercontent.com/3255994/190896359-a11d26a0-e80e-4db6-8e1e-31932e23a80a.PNG)

Hi, thank you for your code. I have tested your optimized_txt2img.py, and the inference time is indeed about 24-26 sec per image. Can the inference time decrease to 14-16 sec if the SD model has...

Do we need to update the model file when running optimizedSD/optimize_txt2img.py?