stable-diffusion-webui-docker

CUDA 12.8 for RTX 5000 series support (CUDA error: no kernel image is available for execution on the device)

Open SuperPat45 opened this issue 8 months ago • 1 comment

Has this issue been opened before?

  • [X] It is not in the FAQ, I checked.
  • [X] It is not in the issues, I searched.

Describe the bug

Running in Docker Compose, generating an image throws the error:

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

This error most likely appears because the RTX 5090 needs CUDA 12.8: as the warning in the log shows, the current PyTorch build ships no kernels for its sm_120 compute capability.

Can you upgrade your image to the latest PyTorch 2.7.0 with CUDA 12.8?
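
A quick way to confirm the mismatch from the host (the container name is taken from the compose file further down; torch.cuda.get_arch_list() reports the compute capabilities the installed wheel was built for):

# print the torch version, its CUDA toolkit version, and the compiled-in architectures
docker exec sdwebui python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_arch_list())"

On the current image this lists sm_50 through sm_90 but no sm_120, matching the warning in the log below.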

Which UI

auto

Hardware / Software

  • OS: Ubuntu
  • OS version: 24.10
  • WSL version (if applicable):
  • Docker Version: 28.0.4
  • Docker compose version: 22.34.0
  • Repo version: simonmcnair/automatic1111:master
  • RAM: 32GB
  • GPU/VRAM: RTX 5090 / 32GB

Steps to Reproduce

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Additional context

This is the full log:

sdwebui  | /stable-diffusion-webui
sdwebui  | total 784K
sdwebui  | drwxr-xr-x  1 root root 4.0K Apr 14 17:39 .
sdwebui  | drwxr-xr-x  1 root root 4.0K May  2 12:51 ..
sdwebui  | -rw-r--r--  1 root root   48 Apr 14 17:38 .eslintignore
sdwebui  | -rw-r--r--  1 root root 3.4K Apr 14 17:38 .eslintrc.js
sdwebui  | drwxr-xr-x  8 root root 4.0K Apr 14 17:38 .git
sdwebui  | -rw-r--r--  1 root root   55 Apr 14 17:38 .git-blame-ignore-revs
sdwebui  | drwxr-xr-x  4 root root 4.0K Apr 14 17:38 .github
sdwebui  | -rw-r--r--  1 root root  573 Apr 14 17:38 .gitignore
sdwebui  | -rw-r--r--  1 root root  119 Apr 14 17:38 .pylintrc
sdwebui  | -rw-r--r--  1 root root  95K Apr 14 17:38 CHANGELOG.md
sdwebui  | -rw-r--r--  1 root root  243 Apr 14 17:38 CITATION.cff
sdwebui  | -rw-r--r--  1 root root  657 Apr 14 17:38 CODEOWNERS
sdwebui  | -rw-r--r--  1 root root  35K Apr 14 17:38 LICENSE.txt
sdwebui  | -rw-r--r--  1 root root  13K Apr 14 17:38 README.md
sdwebui  | -rw-r--r--  1 root root  146 Apr 14 17:38 _typos.toml
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:38 configs
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:38 embeddings
sdwebui  | -rw-r--r--  1 root root  167 Apr 14 17:38 environment-wsl2.yaml
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:38 extensions
sdwebui  | drwxr-xr-x 13 root root 4.0K Apr 14 17:38 extensions-builtin
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:38 html
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:39 interrogate
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:38 javascript
sdwebui  | -rw-r--r--  1 root root 1.3K Apr 14 17:38 launch.py
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:38 localizations
sdwebui  | drwxr-xr-x  7 root root 4.0K Apr 14 17:38 models
sdwebui  | drwxr-xr-x  7 root root 4.0K Apr 14 17:38 modules
sdwebui  | -rw-r--r--  1 root root  185 Apr 14 17:38 package.json
sdwebui  | -rw-r--r--  1 root root  841 Apr 14 17:38 pyproject.toml
sdwebui  | drwxr-xr-x  8 root root 4.0K Apr 14 17:36 repositories
sdwebui  | -rw-r--r--  1 root root   49 Apr 14 17:38 requirements-test.txt
sdwebui  | -rw-r--r--  1 root root  389 Apr 14 17:38 requirements.txt
sdwebui  | -rw-r--r--  1 root root   42 Apr 14 17:38 requirements_npu.txt
sdwebui  | -rw-r--r--  1 root root  693 Apr 14 17:38 requirements_versions.txt
sdwebui  | -rw-r--r--  1 root root 411K Apr 14 17:38 screenshot.png
sdwebui  | -rw-r--r--  1 root root 6.5K Apr 14 17:38 script.js
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:38 scripts
sdwebui  | -rw-r--r--  1 root root  43K Apr 14 17:38 style.css
sdwebui  | drwxr-xr-x  4 root root 4.0K Apr 14 17:38 test
sdwebui  | drwxr-xr-x  2 root root 4.0K Apr 14 17:38 textual_inversion_templates
sdwebui  | -rw-r--r--  1 root root  751 Apr 14 17:38 webui-macos-env.sh
sdwebui  | -rw-r--r--  1 root root   84 Apr 14 17:38 webui-user.bat
sdwebui  | -rw-r--r--  1 root root 1.4K Apr 14 17:38 webui-user.sh
sdwebui  | -rw-r--r--  1 root root 2.5K Apr 14 17:38 webui.bat
sdwebui  | -rw-r--r--  1 root root 5.3K Apr 14 17:38 webui.py
sdwebui  | -rwxr-xr-x  1 root root  11K Apr 14 17:38 webui.sh
sdwebui  | skipping directory .
sdwebui  | skipping directory .
sdwebui  | Mounted .cache
sdwebui  | Mounted config_states
sdwebui  | mkdir: created directory '/stable-diffusion-webui/repositories/CodeFormer'
sdwebui  | mkdir: created directory '/stable-diffusion-webui/repositories/CodeFormer/weights'
sdwebui  | Mounted .cache
sdwebui  | Mounted embeddings
sdwebui  | Mounted config.json
sdwebui  | Mounted models
sdwebui  | Mounted styles.csv
sdwebui  | Mounted ui-config.json
sdwebui  | Mounted extensions
sdwebui  | Installing extension dependencies (if any)
sdwebui  | /opt/conda/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
sdwebui  |   warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
sdwebui  | /opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py:209: UserWarning:
sdwebui  | NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
sdwebui  | The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
sdwebui  | If you want to use the NVIDIA GeForce RTX 5090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
sdwebui  |
sdwebui  |   warnings.warn(
sdwebui  | Calculating sha256 for /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors: Running on local URL:  http://0.0.0.0:7860
sdwebui  |
sdwebui  | To create a public link, set `share=True` in `launch()`.
sdwebui  | Startup time: 3.3s (import torch: 1.4s, import gradio: 0.4s, setup paths: 0.5s, initialize shared: 0.1s, other imports: 0.2s, load scripts: 0.2s, create ui: 0.2s, gradio launch: 0.2s).
sdwebui  | 6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
sdwebui  | Loading weights [6ce0161689] from /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
sdwebui  | Creating model from config: /stable-diffusion-webui/configs/v1-inference.yaml
sdwebui  | /opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:896: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
sdwebui  |   warnings.warn(
sdwebui  | Applying attention optimization: Doggettx... done.
sdwebui  | loading stable diffusion model: RuntimeError
sdwebui  | Traceback (most recent call last):
sdwebui  |   File "/opt/conda/lib/python3.10/threading.py", line 973, in _bootstrap
sdwebui  |     self._bootstrap_inner()
sdwebui  |   File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
sdwebui  |     self.run()
sdwebui  |   File "/opt/conda/lib/python3.10/threading.py", line 953, in run
sdwebui  |     self._target(*self._args, **self._kwargs)
sdwebui  |   File "/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
sdwebui  |     shared.sd_model  # noqa: B018
sdwebui  |   File "/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
sdwebui  |     return modules.sd_models.model_data.get_sd_model()
sdwebui  |   File "/stable-diffusion-webui/modules/sd_models.py", line 693, in get_sd_model
sdwebui  |     load_model()
sdwebui  |   File "/stable-diffusion-webui/modules/sd_models.py", line 869, in load_model
sdwebui  |     sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
sdwebui  |   File "/stable-diffusion-webui/modules/sd_models.py", line 728, in get_empty_cond
sdwebui  |     d = sd_model.get_learned_conditioning([""])
sdwebui  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
sdwebui  |     c = self.cond_stage_model(c)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |     return self._call_impl(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |     return forward_call(*args, **kwargs)
sdwebui  |   File "/stable-diffusion-webui/modules/sd_hijack_clip.py", line 313, in forward
sdwebui  |     return super().forward(texts)
sdwebui  |   File "/stable-diffusion-webui/modules/sd_hijack_clip.py", line 227, in forward
sdwebui  |     z = self.process_tokens(tokens, multipliers)
sdwebui  |   File "/stable-diffusion-webui/modules/sd_hijack_clip.py", line 269, in process_tokens
sdwebui  |     z = self.encode_with_transformers(tokens)
sdwebui  |   File "/stable-diffusion-webui/modules/sd_hijack_clip.py", line 352, in encode_with_transformers
sdwebui  |     outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |     return self._call_impl(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1582, in _call_impl
sdwebui  |     result = forward_call(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 822, in forward
sdwebui  |     return self.text_model(
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |     return self._call_impl(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |     return forward_call(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 730, in forward
sdwebui  |     hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |     return self._call_impl(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |     return forward_call(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 227, in forward
sdwebui  |     inputs_embeds = self.token_embedding(input_ids)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |     return self._call_impl(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |     return forward_call(*args, **kwargs)
sdwebui  |   File "/stable-diffusion-webui/modules/sd_hijack.py", line 351, in forward
sdwebui  |     inputs_embeds = self.wrapped(input_ids)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |     return self._call_impl(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |     return forward_call(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 163, in forward
sdwebui  |     return F.embedding(
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2264, in embedding
sdwebui  |     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
sdwebui  | RuntimeError: CUDA error: no kernel image is available for execution on the device
sdwebui  | CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
sdwebui  | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
sdwebui  | Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
sdwebui  |
sdwebui  |
sdwebui  |
sdwebui  | Stable diffusion model failed to load
sdwebui  | Exception in thread Thread-2 (load_model):
sdwebui  | Traceback (most recent call last):
sdwebui  |   File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
sdwebui  |     self.run()
sdwebui  |   File "/opt/conda/lib/python3.10/threading.py", line 953, in run
sdwebui  |     self._target(*self._args, **self._kwargs)
sdwebui  |   File "/stable-diffusion-webui/modules/initialize.py", line 154, in load_model
sdwebui  |     devices.first_time_calculation()
sdwebui  |   File "/stable-diffusion-webui/modules/devices.py", line 281, in first_time_calculation
sdwebui  |     conv2d(x)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |     return self._call_impl(*args, **kwargs)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |     return forward_call(*args, **kwargs)
sdwebui  |   File "/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 599, in network_Conv2d_forward
sdwebui  |     return originals.Conv2d_forward(self, input)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
sdwebui  |     return self._conv_forward(input, self.weight, self.bias)
sdwebui  |   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
sdwebui  |     return F.conv2d(input, weight, bias, self.stride,
sdwebui  | RuntimeError: CUDA error: no kernel image is available for execution on the device
sdwebui  | CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
sdwebui  | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
sdwebui  | Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
sdwebui  |
sdwebui  | Using already loaded model v1-5-pruned-emaonly.safetensors [6ce0161689]: done in 0.0s
sdwebui  | Downloading VAEApprox model to: /stable-diffusion-webui/models/VAE-approx/model.pt
100%|██████████| 209k/209k [00:00<00:00, 13.1MB/s]
sdwebui  | *** Error completing request
sdwebui  | *** Arguments: ('task(p9t4yi49naj3fmz)', <gradio.routes.Request object at 0x77daecaa9870>, 'dogs', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, 'from modules.processing import process_images\n\np.width = 768\np.height = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
sdwebui  |     Traceback (most recent call last):
sdwebui  |       File "/stable-diffusion-webui/modules/call_queue.py", line 74, in f
sdwebui  |         res = list(func(*args, **kwargs))
sdwebui  |       File "/stable-diffusion-webui/modules/call_queue.py", line 53, in f
sdwebui  |         res = func(*args, **kwargs)
sdwebui  |       File "/stable-diffusion-webui/modules/call_queue.py", line 37, in f
sdwebui  |         res = func(*args, **kwargs)
sdwebui  |       File "/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
sdwebui  |         processed = processing.process_images(p)
sdwebui  |       File "/stable-diffusion-webui/modules/processing.py", line 847, in process_images
sdwebui  |         res = process_images_inner(p)
sdwebui  |       File "/stable-diffusion-webui/modules/processing.py", line 966, in process_images_inner
sdwebui  |         p.setup_conds()
sdwebui  |       File "/stable-diffusion-webui/modules/processing.py", line 1520, in setup_conds
sdwebui  |         super().setup_conds()
sdwebui  |       File "/stable-diffusion-webui/modules/processing.py", line 502, in setup_conds
sdwebui  |         self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
sdwebui  |       File "/stable-diffusion-webui/modules/processing.py", line 488, in get_conds_with_caching
sdwebui  |         cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
sdwebui  |       File "/stable-diffusion-webui/modules/prompt_parser.py", line 188, in get_learned_conditioning
sdwebui  |         conds = model.get_learned_conditioning(texts)
sdwebui  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
sdwebui  |         c = self.cond_stage_model(c)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |         return self._call_impl(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |         return forward_call(*args, **kwargs)
sdwebui  |       File "/stable-diffusion-webui/modules/sd_hijack_clip.py", line 313, in forward
sdwebui  |         return super().forward(texts)
sdwebui  |       File "/stable-diffusion-webui/modules/sd_hijack_clip.py", line 227, in forward
sdwebui  |         z = self.process_tokens(tokens, multipliers)
sdwebui  |       File "/stable-diffusion-webui/modules/sd_hijack_clip.py", line 269, in process_tokens
sdwebui  |         z = self.encode_with_transformers(tokens)
sdwebui  |       File "/stable-diffusion-webui/modules/sd_hijack_clip.py", line 352, in encode_with_transformers
sdwebui  |         outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |         return self._call_impl(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1582, in _call_impl
sdwebui  |         result = forward_call(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 822, in forward
sdwebui  |         return self.text_model(
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |         return self._call_impl(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |         return forward_call(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 730, in forward
sdwebui  |         hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |         return self._call_impl(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |         return forward_call(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 227, in forward
sdwebui  |         inputs_embeds = self.token_embedding(input_ids)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |         return self._call_impl(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |         return forward_call(*args, **kwargs)
sdwebui  |       File "/stable-diffusion-webui/modules/sd_hijack.py", line 351, in forward
sdwebui  |         inputs_embeds = self.wrapped(input_ids)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
sdwebui  |         return self._call_impl(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
sdwebui  |         return forward_call(*args, **kwargs)
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 163, in forward
sdwebui  |         return F.embedding(
sdwebui  |       File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2264, in embedding
sdwebui  |         return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
sdwebui  |     RuntimeError: CUDA error: no kernel image is available for execution on the device
sdwebui  |     CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
sdwebui  |     For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
sdwebui  |     Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
sdwebui  |
sdwebui  |
sdwebui  | ---

This is my docker-compose.yml:

services:
  sdwebui:
    image: simonmcnair/automatic1111:master
    container_name: sdwebui
    ports:
      - 7860:7860
    volumes:
      - /opt/openwebui/datasdwebui/models:/data
      - /opt/openwebui/datasdwebui/output:/output
    environment:
      - CLI_ARGS=--allow-code --medvram --xformers --enable-insecure-extension-access --api
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

SuperPat45 · May 02 '25 13:05

Try this version of services/AUTOMATIC1111/Dockerfile:

FROM alpine/git:2.36.2 as download

COPY clone.sh /clone.sh

RUN . /clone.sh stable-diffusion-webui-assets https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git 6f7db241d2f8ba7457bac5ca9753331f0c266917

RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf \
  && rm -rf assets data/**/*.png data/**/*.jpg data/**/*.gif

RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c9f7a5b4813144b2fb9
RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git ab527a9a6d347f364e3d185ba6d714e22d80cb3c
RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2cf03aaf6e704197fd0dae7c7f96aa59cf1b11c9
RUN . /clone.sh generative-models https://github.com/Stability-AI/generative-models 45c443b316737a4ab6e40413d7794a7f5657c19f


# FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime
FROM pytorch/pytorch:2.7.0-cuda12.8-cudnn9-runtime

# RUN conda create --name stable-diffusion python=3.10
# SHELL ["/opt/conda/bin/conda", "run", "-n", "stable-diffusion", "/bin/bash", "-c"]

# RUN conda init
# RUN conda activate stable-diffusion

ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1

RUN --mount=type=cache,target=/var/cache/apt \
  apt-get update && \
  # we need those
  apt-get install -y fonts-dejavu-core rsync git jq moreutils aria2 \
  # extensions needs those
  ffmpeg libglfw3-dev libgles2-mesa-dev pkg-config libcairo2 libcairo2-dev build-essential

WORKDIR /
RUN --mount=type=cache,target=/root/.cache/pip \
  git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git && \
  cd stable-diffusion-webui && \
  git reset --hard v1.10.1 && \
  pip install -r requirements_versions.txt

RUN --mount=type=cache,target=/root/.cache/pip \
   pip uninstall -y typing_extensions && \
   pip install typing_extensions==4.11.0

ENV ROOT=/stable-diffusion-webui

COPY --from=download /repositories/ ${ROOT}/repositories/
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/clip_interrogator/data/* ${ROOT}/interrogate

RUN --mount=type=cache,target=/root/.cache/pip \
  pip install pyngrok \
  git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 \
  git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 \
  git+https://github.com/mlfoundations/[email protected]

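# Key change for RTX 50-series (sm_120): the cu128 wheel index provides torch/xformers builds with Blackwell kernels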
RUN pip install torch torchvision torchaudio xformers==0.0.30 --index-url https://download.pytorch.org/whl/cu128
RUN pip install aria2p

# there seems to be a memory leak (or maybe just memory not being freed fast enough) that is fixed by this version of malloc
# maybe move this up to the dependencies list.
RUN apt-get -y install libgoogle-perftools-dev && apt-get clean
ENV LD_PRELOAD=libtcmalloc.so

COPY . /docker

RUN \
  # mv ${ROOT}/style.css ${ROOT}/user.css && \
  # one of the ugliest hacks I ever wrote \
  # sed -i 's/in_app_dir = .*/in_app_dir = True/g' /opt/conda/lib/python3.10/site-packages/gradio/routes.py && \
  git config --global --add safe.directory '*'

RUN conda install libsqlite=3.48.0

WORKDIR ${ROOT}
ENV NVIDIA_VISIBLE_DEVICES=all
ENV CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD python -u webui.py --listen --port 7860 ${CLI_ARGS}
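
A rough sketch of how to build and run it, assuming the Dockerfile above replaces services/AUTOMATIC1111/Dockerfile in a checkout of this repo (profile names are the stock ones):

docker compose --profile download up --build
docker compose --profile auto up --build

Alternatively, to keep using your own compose file, build and tag the image locally under the name your compose already pulls:

docker build -t simonmcnair/automatic1111:master services/AUTOMATIC1111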

dpk-it · May 02 '25 21:05