stable-diffusion-webui-docker
Please help: I got `RuntimeError: CUDA out of memory.`
When I run `docker compose --profile hlky up --build`, this is the error log:

```
/mnt/d/stable-diffusion-webui-docker$ docker compose --profile hlky up --build
[+] Building 9.5s (18/18) FINISHED
 => [internal] load build definition from Dockerfile                                    0.0s
 => => transferring dockerfile: 32B                                                     0.0s
 => [internal] load .dockerignore                                                       0.0s
 => => transferring context: 2B                                                         0.0s
 => resolve image config for docker.io/docker/dockerfile:1                              5.4s
 => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc  0.0s
 => [internal] load build definition from Dockerfile                                    0.0s
 => [internal] load .dockerignore                                                       0.0s
 => [internal] load metadata for docker.io/continuumio/miniconda3:4.12.0                3.9s
 => [1/9] FROM docker.io/continuumio/miniconda3:4.12.0@sha256:977263e8d1e476972fddab1c75fe050dd3cd17626390e874448bd92721fd659b  0.0s
 => [internal] load build context                                                       0.0s
 => => transferring context: 132B                                                       0.0s
 => CACHED [2/9] RUN conda install python=3.8.5 && conda clean -a -y                    0.0s
 => CACHED [3/9] RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y  0.0s
 => CACHED [4/9] RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean  0.0s
 => CACHED [5/9] RUN <<EOF (git config --global http.postBuffer 1048576000...)          0.0s
 => CACHED [6/9] RUN <<EOF (cd stable-diffusion...)                                     0.0s
 => CACHED [7/9] COPY . /docker/                                                        0.0s
 => CACHED [8/9] RUN python /docker/info.py /stable-diffusion/frontend/frontend.py && chmod +x /docker/mount.sh  0.0s
 => CACHED [9/9] WORKDIR /stable-diffusion                                              0.0s
 => exporting to image                                                                  0.0s
 => => exporting layers                                                                 0.0s
 => => writing image sha256:9368ebf0b1458683ef73e8493e835f0c5a3cb5c5691bcd4ea1a525dc542fb39f  0.0s
 => => naming to docker.io/library/webui-docker-hlky                                    0.0s
```
```
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 1/0
 ⠿ Container webui-docker-hlky-1  Created                                               0.0s
Attaching to webui-docker-hlky-1
webui-docker-hlky-1  | + /docker/mount.sh
webui-docker-hlky-1  | Mounted .cache
webui-docker-hlky-1  | Mounted LDSR
webui-docker-hlky-1  | Mounted RealESRGAN
webui-docker-hlky-1  | Mounted StableDiffusion
webui-docker-hlky-1  | Mounted GFPGANv1.4.pth
webui-docker-hlky-1  | Mounted GFPGANv1.4.pth
webui-docker-hlky-1  | + python3 -u scripts/webui.py --outdir /output --ckpt /data/StableDiffusion/model.ckpt --optimized-turbo
webui-docker-hlky-1  | Found GFPGAN
webui-docker-hlky-1  | Found RealESRGAN
webui-docker-hlky-1  | Found LDSR
webui-docker-hlky-1  | Loading model from /data/StableDiffusion/model.ckpt
webui-docker-hlky-1  | Global Step: 470000
webui-docker-hlky-1  | UNet: Running in eps-prediction mode
webui-docker-hlky-1  | Traceback (most recent call last):
webui-docker-hlky-1  |   File "scripts/webui.py", line 530, in <module>
webui-docker-hlky-1  |     model,modelCS,modelFS,device, config = load_SD_model()
webui-docker-hlky-1  |   File "scripts/webui.py", line 501, in load_SD_model
webui-docker-hlky-1  |     model.cuda()
webui-docker-hlky-1  |   File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 132, in cuda
webui-docker-hlky-1  |     return super().cuda(device=device)
webui-docker-hlky-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 688, in cuda
webui-docker-hlky-1  |     return self._apply(lambda t: t.cuda(device))
webui-docker-hlky-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 578, in _apply
webui-docker-hlky-1  |     module._apply(fn)
webui-docker-hlky-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 578, in _apply
webui-docker-hlky-1  |     module._apply(fn)
webui-docker-hlky-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 578, in _apply
webui-docker-hlky-1  |     module._apply(fn)
webui-docker-hlky-1  |   [Previous line repeated 3 more times]
webui-docker-hlky-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 601, in _apply
webui-docker-hlky-1  |     param_applied = fn(param)
webui-docker-hlky-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 688, in <lambda>
webui-docker-hlky-1  |     return self._apply(lambda t: t.cuda(device))
webui-docker-hlky-1  | RuntimeError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 12.00 GiB total capacity; 966.10 MiB already allocated; 8.61 GiB free; 1012.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
webui-docker-hlky-1 exited with code 1
```
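The error message itself suggests setting `max_split_size_mb`, which tells PyTorch's caching allocator to split large cached blocks and can work around fragmentation. One low-risk experiment (a sketch, not a confirmed fix for this repo — the service name `hlky` and the chosen value are assumptions, so check them against the project's `docker-compose.yml`) is to pass `PYTORCH_CUDA_ALLOC_CONF` into the container via a compose override file:

```yaml
# docker-compose.override.yml — hypothetical override; the service name
# "hlky" is assumed to match the hlky profile in this repo's compose file.
services:
  hlky:
    environment:
      # Hint taken from the error message: cap the size of cached
      # allocator blocks to reduce fragmentation. 128 is a starting
      # guess; valid values are >= 21 (MiB).
      - PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```

Docker Compose merges `docker-compose.override.yml` automatically, so the same `docker compose --profile hlky up --build` command would pick it up.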
Can you try running it from PowerShell, not WSL?
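Concretely, that would mean opening a Windows PowerShell prompt (not a WSL shell) and starting the stack from the Windows-side path. A sketch, assuming the repo sits on drive `D:` as the `/mnt/d/...` prompt in the log suggests:

```powershell
# From Windows PowerShell, not WSL — Docker Desktop handles the
# GPU passthrough either way, but this rules out WSL-side issues.
cd D:\stable-diffusion-webui-docker
docker compose --profile hlky up --build
```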
This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
This issue was closed because it has been stalled for 7 days with no activity.