latent-diffusion
Solving environment: failed [ResolvePackageNotFound: - cudatoolkit=11.3.1]
Hey there!
I'm using macOS Monterey. I installed Anaconda via the Installer (https://www.anaconda.com/products/individual) and then when running:
conda env create -f environment.yaml
It fails with:

Solving environment: failed [ResolvePackageNotFound: - cudatoolkit=11.3.1]
Do you have any idea why?
Hi, I have the same problem.
I tried adding conda-forge to the environment.yaml channels, but still no luck. I ended up un-pinning it:
diff --git a/environment.yaml b/environment.yaml
index f36b0e1..2620d59 100644
--- a/environment.yaml
+++ b/environment.yaml
@@ -1,11 +1,14 @@
name: ldm
channels:
- pytorch
+ - nvidia
+ - conda-forge
+ - anaconda
- defaults
dependencies:
- python=3.8.5
- pip=20.3
- - cudatoolkit=11.0
+ - cudatoolkit
- pytorch=1.7.0
- torchvision=0.8.1
- numpy=1.19.2
https://stackoverflow.com/questions/64589421/packagesnotfounderror-cudatoolkit-11-1-0-when-installing-pytorch
I had the same issue with macOS Monterey 12.3.1 on my Intel Mac. I couldn't find a channel that could provide cudatoolkit 11.0, so I let conda install whatever cudatoolkit it could, which ended up being 9.0. Then when I ran `python scripts/txt2img.py --prompt "a virus monster is playing guitar, oil on canvas" --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0 --ddim_steps 50` it eventually gave me "AssertionError: Torch not compiled with CUDA enabled".
I used the advice from here and downgraded torchvision to 0.6.0, but that version requires pytorch 1.5.0 instead of the 1.7.0 specified in this project's environment.yaml, so running on torchvision 0.6.0 and pytorch 1.5.0 threw "ModuleNotFoundError: No module named 'torch.optim.swa_utils'". Also, pytorch 1.5.0 conflicts with pytorch-lightning, so there are all kinds of issues here.
I am wondering whether this comes from not finding cudatoolkit 11.0 or from something else.
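Before hunting for a working version combination, it can save time to check what torch itself reports (a minimal snippet, assuming torch imports at all; on macOS there is no CUDA backend, so the availability check returns False there):

```python
import torch

print(torch.__version__)          # e.g. 1.7.0 if the pin resolved
print(torch.cuda.is_available())  # False on macOS: no CUDA backend
print(torch.version.cuda)         # None when torch was built without CUDA
```

If the last two lines show False/None, no cudatoolkit pin in environment.yaml will help; the installed torch build simply has no CUDA support.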
I have the same issue. Is there any clear solution?
I do believe it to be an issue with cudatoolkit < 11.0; eventually I just ran it on Linux instead to avoid the Mac issues, and that worked. Fair warning though: it is very GPU-intensive, and even a friend of mine with a 2080 couldn't get it to run.
I finally managed to make it work on macOS. These are the steps:
- Remove/comment the line with cudatoolkit in `environment.yaml`
- In `environment.yaml`, change the pytorch-lightning version to 1.6.1
- In `txt2img.py`, comment out `model.cuda()` (line 28)
- There are some instances in the code where a pytorch module is sent to device 'cuda'. Just change them to 'cpu', e.g. `ldm/models/diffusion/ddim.py` line 21
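For reference, the first two steps amount to an environment.yaml along these lines (a sketch only: the cudatoolkit line from the file shown above is dropped and pytorch-lightning is bumped as described, assuming it sits in a pip section as in the original file; the rest of the project's dependency list is unchanged):

```yaml
name: ldm
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.8.5
  - pip=20.3
  # cudatoolkit removed: no CUDA on macOS
  - pytorch=1.7.0
  - torchvision=0.8.1
  - numpy=1.19.2
  - pip:
      - pytorch-lightning==1.6.1
```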
@vladhondru25 yep, that worked for me. And just to be clear to others: there are many spots where you have to change the device from "cuda" to "cpu", not just the one you mentioned.
It's dog-slow, but it works. Thanks!
This should be added as a bugfix; someone should submit a pull request for it. I noticed some other projects that use Stable Diffusion do indeed have a CPU flag to tell it to use the CPU instead. It would be a major fix for anyone using older Macs, especially artists that aren't yet successful enough to upgrade their environment to something more CUDA-compatible.
Note that it does do this check now on line 242:
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
However, the line you mentioned still calls cuda unconditionally above it (now on line 63):
model.cuda()
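For anyone patching this by hand, the fix is to route that unconditional call through the same check (a sketch only, using a stand-in `nn.Linear` in place of the actual model; this is not the repo's own patch):

```python
import torch
import torch.nn as nn

# the same check the script already performs further down
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model = nn.Linear(4, 2)   # stand-in for the real model
model = model.to(device)  # replaces the unconditional model.cuda()

# inputs must live on the same device as the model
x = torch.randn(1, 4, device=device)
out = model(x)
print(out.shape)  # torch.Size([1, 2])
```

Using `model.to(device)` everywhere (and creating tensors with `device=device`) is what makes the CPU fallback actually take effect; any remaining hard-coded `.cuda()` call will still raise "Torch not compiled with CUDA enabled" on a Mac.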