stablediffusion
Installation seems unclear
What is a .ckpt file? There don't seem to be any in the repo, but one is used in the example. Are we supposed to download these, or are they under a different name?
Can the full command line
python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img <path-to-img.jpg> --strength 0.8 --ckpt <path/to/model.ckpt>
be simplified so that model.ckpt is already in the repo, or else downloaded automatically?
The two .ckpt files can be downloaded as per the instructions here:
https://github.com/Stability-AI/stablediffusion#reference-sampling-script
First, download the weights for SD2.0-v and SD2.0-base.
.ckpt files are known as models or weights. They represent all the AI's knowledge from training (and as such are about 5 Gigabytes in size). Stability AI produced several models for SD 2.0 trained on different things. There's a base model that was trained to produce 512x512 images, a model trained to produce high resolution 768x768 images, an "inpainting" model trained to modify parts of existing images, and a depthmap model trained to convert an image+depthmap into a new image while preserving its 3D structure.
There are plenty of old .ckpt models for Stable Diffusion 1.5 that are not compatible with Stable Diffusion 2.0. You need to download the new 2.0 models from HuggingFace.
Can you put a full example in the readme where you curl one of the models and then run it?
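It's not in the readme, but something like this ought to work end to end (the HuggingFace download URL is my assumption from the model pages, so double-check it before relying on it):

mkdir -p models
curl -L -o models/768-v-ema.ckpt https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt
python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt models/768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768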
I agree with @zackees, it is not really easy to spot those links in the readme.
The setup instructions also assume the user is updating from an existing activated LDM/SD1 conda env, and installing requirements.txt is not mentioned, leaving some packages missing.
These are kind of a noob trap. :p
Run
conda env create -f environment.yaml
conda activate ldm
first (pasted from the stable-diffusion-v1 requirements). Then do the rest from the stable-diffusion-v2 requirements. Stay in the activated "ldm" conda env the whole time!
Also note that CUDA_HOME needs to be "~/.conda/envs/ldm" when CUDA is installed via conda (thus export CUDA_HOME=~/.conda/envs/ldm); otherwise the xformers step "pip install -e ." will fail.
Tested on Fedora 36.
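If it helps, here is the whole sequence as one sketch (the requirements.txt step and the xformers checkout location are assumptions on my part; adjust to your setup):

conda env create -f environment.yaml    # from the stable-diffusion-v1 instructions
conda activate ldm                      # stay in this env for everything below
pip install -r requirements.txt         # the stable-diffusion-v2 requirements
export CUDA_HOME=~/.conda/envs/ldm      # needed when CUDA was installed via conda
git clone https://github.com/facebookresearch/xformers.git  # assumed checkout location
cd xformers && pip install -e . && cd ..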
If you know Google Colab, you can try to run stable_diffusion.ipynb. Thanks to 🤗, the procedure is much simpler than it was at first. There is the pain of installing xformers, but it is circumventable.
Direct download links for SD2.0-v and SD2.0-base. I put them in the models folder, but this is not mandatory. Just make sure you reference them in the txt2img command line.
I don't understand. How are two files of 5.2 GB each not mandatory?! And if they're not, then why are we downloading them?!
It is not mandatory to "put them in the models folder".
It is required to download at least one of the model checkpoints, and to specify the path to it with: https://github.com/Stability-AI/stablediffusion/blob/47b6b607fdd31875c9279cd2f4f16b92e4ea958e/scripts/txt2img.py#L150-L154
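In other words, the checkpoint can live anywhere; only the path you pass to --ckpt matters. A hypothetical example (the download location is made up):

python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt ~/Downloads/768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768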
got it! thank you for answering!
I subscribe! Installation seems totally unclear. Can someone update the installation instructions, please? On the other hand, simply putting the ckpt files in the "stable-diffusion-webui/models" folder seems to work.
Then, after the conda setup above, download the model you want to use (SD2.0-v or SD2.0-base) somewhere and run the txt2img.py script. That's basically it.
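One detail worth spelling out, since it bites people later: each checkpoint must be paired with its matching config. Treat the exact pairings below as my reading of the configs folder, not gospel:

# SD2.0-v, 768x768:
python scripts/txt2img.py --prompt "..." --ckpt <path/to/768-v-ema.ckpt> --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768
# SD2.0-base, 512x512:
python scripts/txt2img.py --prompt "..." --ckpt <path/to/512-base-ema.ckpt> --config configs/stable-diffusion/v2-inference.yaml --H 512 --W 512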
GitHub has a 100 MB file size limit, FYI, so the models would have to be hosted elsewhere.
Thank you for your reply. I already did that. I manage to get up to this point:
Global Step: 140000
LatentDiffusion: Running in eps-prediction mode
Traceback (most recent call last):
  ...
  File ".../stable-diffusion/ldm/util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
TypeError: __init__() got an unexpected keyword argument 'use_linear_in_transformer'
and I get stuck here.
Your installation seems to work. If you've used the txt2img command line from the start page (except for the individual path to your downloaded model), I'd open a bug here. This doesn't seem to be about the installation...
This is what I use:
python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt /home/me/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768
And this is what I get:
Global seed set to 42
Loading model from /home/me/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt
Global Step: 140000
LatentDiffusion: Running in eps-prediction mode
Traceback (most recent call last):
  File "/home/me/stable-diffusion/scripts/txt2img.py", line 352, in
But I have two folders in my /home/user path: "stable-diffusion", which is this repo cloned, and "stable-diffusion-webui", where I have the ckpt files downloaded, because the install directions given here are unclear and I don't know where the ckpt should be located in the stable-diffusion folder. So I guess maybe that should be an issue, although basically I don't know what the error is about...
I don't think it's a bug, I think it's just some config or path issue, because, again, the installation directions are unclear... for a user that doesn't really know Linux.
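If it helps to narrow it down (my assumption: this error usually means an older SD1 "ldm" package is being imported instead of this repo's updated code), you can check which ldm module Python actually picks up:

python -c "import ldm; print(ldm.__file__)"   # should print a path inside your stable-diffusion clone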
I too am seeing the same error:
Python.Runtime.PythonException: TypeError : __init__() got an unexpected keyword argument 'use_linear_in_transformer'
I'm upgrading from my own custom environment and assume I'm missing a package. However, I've upgraded/added everything in the environment.yaml, but I'm still getting this error.
Any ideas on what packages may be missing? Google turns up nothing, unfortunately.
The 'use_linear_in_transformer' error was fixed by a step I had skipped (since I was afraid of wrecking my currently attached and running diffusion stuff):
pip install -e .
in the stablediffusion checkout, followed by installing open_clip_torch.
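Spelled out as commands, for anyone following along (the clone path is an assumption):

cd ~/stable-diffusion        # your checkout of this repo (path assumed)
pip install -e .             # makes Python use this checkout's updated ldm package
pip install open_clip_torch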
Same problem here.
Still not solved, any help please?
Did you manage to solve this error?