big-sleep
What GPU is required to run it?
I tried executing the dream command on my laptop with a Quadro P2000 (4 GB of VRAM) and got a CUDA out-of-memory error.
@Eddh you'll need a graphics card with at least 9 GB of RAM!
@Eddh the minimum I've run this on is a 2070 SUPER with 8 GB of RAM, all of which gets used. With basically nothing else running on the GPU, it gets through the full process.
@remdu Just use the new Colab link if you don't have the processing power :)
An 8 GB GPU seems to work if it's a second GPU, so its baseline VRAM usage is 0.0 GB.
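For anyone trying the second-GPU route, here's a minimal sketch of pinning the run to the idle card. CUDA_VISIBLE_DEVICES is a standard CUDA mechanism (not big-sleep-specific), and the prompt is just a placeholder:

```python
import os

# Hide GPU 0 (the card driving the display) so PyTorch only sees the idle one.
# This must be set before CUDA is initialized, i.e. before any torch CUDA call.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

from big_sleep import Imagine  # big-sleep's Python API

dream = Imagine(text="an apple made of glass")  # placeholder prompt
dream()
```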
Do you think there is any parameter that can be tuned to make it run on a lower-end, yet quite powerful (I think), GPU?
RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 6.00 GiB total capacity; 3.97 GiB already allocated; 117.08 MiB free; 3.99 GiB reserved in total by PyTorch)
From what I understood elsewhere, this depends on the batch size. So sad I can't run this (even if it means waiting a few more minutes).
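If you want to see exactly how much memory is free before kicking off a run, PyTorch exposes the same counters that appear in that error message. A quick check (standard torch.cuda calls, nothing big-sleep-specific):

```python
import torch

assert torch.cuda.is_available(), "CUDA must be available to run Big Sleep"

props = torch.cuda.get_device_properties(0)
total = props.total_memory
allocated = torch.cuda.memory_allocated(0)   # memory in active tensors
reserved = torch.cuda.memory_reserved(0)     # memory held by PyTorch's caching allocator

print(f"{props.name}: {total / 2**30:.2f} GiB total, "
      f"{reserved / 2**30:.2f} GiB reserved by PyTorch, "
      f"{allocated / 2**30:.2f} GiB allocated")
```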
@iGio90 Can you try with image_size=128? It may pull a smaller model. Not sure it will work, but give it a shot.
Yeah, I tried reducing it to 128, but I got the same memory error.
@iGio90 I feel you; these days video RAM is what matters for running models. There's a kind of assumption on the research side that the card has more than 11 GB of RAM, so models tend to converge to about that runtime size.
This can be made to run on the CPU, but on an i7-6700 it took ~10 hours to generate an image.
Nope, it will throw: "assert torch.cuda.is_available(), 'CUDA must be available in order to use Deep Daze'"
Damn. You all say "a GPU with 8 GB VRAM is required" and "everyone with a GPU". I have an AMD RX 570 with 8 GB of VRAM. Imagine how I felt suddenly realizing, after installation, that this only works with CUDA. I'm so sad and mad... I can't do any cool AI stuff with my card. FFS. Edit: Mad at myself for buying AMD.
Use Colab or ROCm.
Would that work with this and other programs? How easy is that to set up? Edit: ROCm is Linux-only, and Colab doesn't use my hardware, right? So it's probably really slow. :( Edit: Darn, I just saw that there are pre-made Colab notebooks; I made my own. Well, whatever. It seems to work, but I don't know HOW slow it is. How slow is 1.41 it/s?
Colab using a P100 is definitely much faster than your RX 570, and you have twice the VRAM.
This made me read through things, and apparently it uses one of these cards (you can't choose): Nvidia K80s, T4s, P4s, and P100s. And there are unspecified limits on GPU and VM usage, so it can suddenly stop running on a GPU with no warning. There also appear to be local runtimes via Jupyter, which would use my own hardware, but of course, I don't have an Nvidia card.
Edit: I've now seen it running on a K80 and a T4. The K80 is incredibly slow compared to the T4: about 5.6 seconds per iteration, while the T4 manages 1.7 iterations per second. I guess my RX 570 would be around the K80's level of performance.
Edit: I have now tried all four available cards:
- Tesla T4 (fast: 1.70 it/s)
- Tesla P100 (OK: 1.10 it/s)
- Tesla P4 (OK: 1.55 s/it)
- Tesla K80 (very slow: 5.5 s/it)
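To put numbers like 1.41 it/s in context, here's a rough back-of-the-envelope ETA. It assumes big-sleep's default schedule of 20 epochs of 1050 iterations each (check the defaults in your installed version):

```python
# Rough wall-clock estimate for a full run at a given iteration speed.
# Assumes big-sleep's defaults (epochs=20, iterations=1050); adjust to taste.
epochs, iterations = 20, 1050

for label, it_per_s in [("Colab run above", 1.41), ("T4", 1.70), ("K80", 1 / 5.5)]:
    hours = epochs * iterations / it_per_s / 3600
    print(f"{label}: ~{hours:.1f} h for {epochs * iterations} iterations")
```

At 1.41 it/s, a full default run works out to roughly 4 hours; a K80 at 5.5 s/it would take over 30.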
Hi, I am considering buying an Nvidia Quadro M4000 or M6000. Will it work with Big Sleep?
From what I've seen, dedicated VRAM is the constraint. I'm using an RTX 3070 with 8 GB of dedicated VRAM; that's barely enough to run Big Sleep, yet the actual GPU processing power is barely used.
I've been able to get VRAM usage down to around 6 GB, and even 4 GB, by lowering the image_size and num_cutouts parameters. --num-cutouts=16 and --image-size=128 should work on a 4 GB card, but I haven't tested that yet (see the sketch below).
It looks like the M4000 has 8 GB of VRAM and the M6000 has 24 GB (!!), so either should work.
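For reference, the same low-memory settings through the Python API. A minimal sketch using big_sleep's Imagine class; the parameter names match the CLI flags above, the prompt is just a placeholder, and the 4 GB claim is untested:

```python
from big_sleep import Imagine

# Low-VRAM settings: a smaller generated image and fewer CLIP cutouts.
dream = Imagine(
    text="a field of flowers at sunset",  # placeholder prompt
    image_size=128,   # BigGAN comes in 128/256/512 variants
    num_cutouts=16,   # far below the default, trades quality for memory
)
dream()
```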
Can I use shared GPU memory instead of dedicated? If so, how?
Just to make this clear: is there any way to make this work with an AMD (Radeon 5700) card?
I don't REALLY know, but you can always use the Colab notebook. I forked the notebook with a better description and more settings: https://colab.research.google.com/drive/1zVHK4t3nXQTsu5AskOOOf3Mc9TnhltUO?usp=sharing