stable-diffusion-webui-forge
[Bug]: RuntimeError: Expected all tensors to be on the same device
Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
Stable diffusion model failed to load
Steps to reproduce the problem
Set up the virtualenv, then start webui.sh
What should have happened?
The model should load and generation should work.
What browsers do you use to access the UI?
No response
Sysinfo
Linux Manjaro, KDE, btrfs
Console logs
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on alexbespik user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
python venv already activate or run without venv: /run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES
################################################################
################################################################
Launching launch.py...
################################################################
Using TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is not linked with libpthreadand will trigger undefined symbol: ptthread_Key_Create error
Using TCMalloc: libtcmalloc.so.4
libtcmalloc.so.4 is not linked with libpthreadand will trigger undefined symbol: ptthread_Key_Create error
Python 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801]
Version: f0.0.5-latest-29-g53057f33
Commit hash: 53057f33ed778b064ba96a6bf811524cb0f239b6
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
Launching Web UI with arguments:
Total VRAM 3904 MB, total RAM 31937 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1650 : native
VAE dtype: torch.float32
Using pytorch cross attention
ControlNet preprocessor location: /run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/models/ControlNetPreprocessor
Loading weights [15012c538f] from /run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/models/Stable-diffusion/realisticVisionV51_v51VAE.safetensors
2024-02-06 20:06:37,574 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 8.5s (prepare environment: 1.8s, import torch: 3.0s, import gradio: 0.8s, setup paths: 0.7s, other imports: 0.5s, load scripts: 0.8s, create ui: 0.5s, gradio launch: 0.3s).
model_type EPS
UNet ADM Dimension 0
QObject::moveToThread: Current thread (0x55c61c1702a0) is not the object's thread (0x55c61c407aa0).
Cannot move to target thread (0x55c61c1702a0)
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx.
/usr/bin/xdg-open: line 686: 6665 Aborted (core dumped) "kde-open${KDE_SESSION_VERSION}" "$1"
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
To load target model SD1ClipModel
Begin to load 1 model
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "/usr/lib/python3.11/threading.py", line 1002, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/usr/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules/initialize.py", line 162, in load_model
shared.sd_model # noqa: B018
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules/shared_items.py", line 133, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules/sd_models.py", line 510, in get_sd_model
load_model()
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules/sd_models.py", line 615, in load_model
sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules/sd_models.py", line 540, in get_empty_cond
return sd_model.cond_stage_model([""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules/sd_hijack_clip.py", line 234, in forward
z = self.process_tokens(tokens, multipliers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules/sd_hijack_clip.py", line 273, in process_tokens
z = self.encode_with_transformers(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules_forge/forge_clip.py", line 9, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 822, in forward
return self.text_model(
^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 730, in forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 227, in forward
inputs_embeds = self.token_embedding(input_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/modules/sd_hijack.py", line 177, in forward
inputs_embeds = self.wrapped(input_ids)
^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/modules/sparse.py", line 163, in forward
return F.embedding(
^^^^^^^^^^^^
File "/run/media/alexbespik/e8df4068-7043-49ee-928b-ecb0cf9e68fb/webui_forge_cu121_torch21/webui/TES/lib/python3.11/site-packages/torch/nn/functional.py", line 2237, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Stable diffusion model failed to load
Additional information
No response
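For reference, the traceback bottoms out in `F.embedding`, which means the CLIP token-embedding weight and the token ids ended up on different devices during model load. A minimal standalone sketch (hypothetical, not Forge code; assumes a CUDA-capable torch install) that reproduces the same message:

```python
import torch
import torch.nn.functional as F

# In lowvram mode the CLIP weights can end up on cuda:0 while the token ids
# for the empty prompt are still built on the cpu, mixing devices in the lookup.
weight = torch.randn(49408, 768, device="cuda")   # token_embedding weight on cuda:0
tokens = torch.randint(0, 49408, (1, 77))         # input_ids left on the cpu

try:
    F.embedding(tokens, weight)                   # raises the error from the log
except RuntimeError as e:
    print(e)  # Expected all tensors to be on the same device ... index_select

# Co-locating the ids with the weights resolves the mismatch:
out = F.embedding(tokens.to(weight.device), weight)
print(out.shape, out.device)                      # torch.Size([1, 77, 768]) cuda:0
```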
This was fixed yesterday; make sure you are using the latest version.
I get the same error but only when not using the "DPM++ 2M KARRAS" (default) sampler. Here's me trying Euler A:
*** Error completing request | 0/1000 [00:00<?, ?it/s]
*** Arguments: ('task(8kwihrctx3p6fe0)', <gradio.routes.Request object at 0x000001F7DC0F65F0>, 'TEST', 'NEGPROMPT TEST', [], 20, 'Euler a', 50, 1, 6.5, 1024, 704, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', ['Downcast alphas_cumprod: True'], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, None, None, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 'Seed', '', None, 'Nothing', '', None, 'Nothing', '', None, 'True', False, False, False, False, False, False, 0, False, [], '') {}
Traceback (most recent call last):
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\txt2img.py", line 110, in txt2img
processed = processing.process_images(p)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 749, in process_images
res = process_images_inner(p)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 920, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 260, in launch_sampling
return func()
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 149, in sample_euler_ancestral
d = to_d(x, sigmas[i], denoised)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 48, in to_d
return (x - denoised) / utils.append_dims(sigma, x.ndim)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
---
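This failure mode is narrower than the one in the original report: sampling starts, but k-diffusion's `to_d` divides the cuda latent by a sigma the scheduler left on the cpu. A rough standalone sketch (hypothetical shapes, not the actual Forge call path; assumes CUDA):

```python
import torch

def append_dims(x, target_dims):
    """Append singleton dims so x broadcasts against a target_dims tensor."""
    return x[(...,) + (None,) * (target_dims - x.ndim)]

def to_d(x, sigma, denoised):
    # mirrors k_diffusion.sampling.to_d: turn a denoised sample into a derivative
    return (x - denoised) / append_dims(sigma, x.ndim)

x = torch.randn(1, 4, 128, 88, device="cuda")
denoised = torch.randn_like(x)
sigmas = torch.linspace(14.6, 0.03, 20)        # schedule tensor left on the cpu

# to_d(x, sigmas[0], denoised)                 # RuntimeError: cuda:0 and cpu!
d = to_d(x, sigmas.to(x.device)[0], denoised)  # fix: move the schedule to x's device
print(d.device)                                # cuda:0
```

Note that `sigmas[0]` on its own would broadcast as a 0-dim scalar, but `append_dims` turns it into a 4-dim cpu tensor, which is why the division trips the device check.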
Got the same issue, here is my log:
*** Error completing request | 2/30 [04:21<1:01:00, 130.74s/it]
*** Arguments: ('task(zu5zmuz9772263u)', <gradio.routes.Request object at 0x000001EE3BF62EC0>, 'Test', 'Negative Test', [], 30, 'Euler a', 1, 1, 7, 1024, 768, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\txt2img.py", line 110, in txt2img
processed = processing.process_images(p)
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\processing.py", line 749, in process_images
res = process_images_inner(p)
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\processing.py", line 920, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 260, in launch_sampling
return func()
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 149, in sample_euler_ancestral
d = to_d(x, sigmas[i], denoised)
File "E:\AIART\Stable Diffusion\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 48, in to_d
return (x - denoised) / utils.append_dims(sigma, x.ndim)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
---
@Diego0920 @LerkyBoy can you share the full log or more info about your GPU specs?
Sure:
venv "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.9-latest-52-gb58b0bd4
Commit hash: b58b0bd4259cf71077dfd7787fb77af4c02760a1
Launching Web UI with arguments: --xformers
Total VRAM 3072 MB, total RAM 16336 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
xformers version: 0.0.23.post1
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1060 3GB : native
VAE dtype: torch.float32
Using xformers cross attention
ControlNet preprocessor location: J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\ControlNetPreprocessor
Loading weights [67ab2fd8ec] from J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\Stable-diffusion\v6.safetensors
2024-02-06 17:38:59,644 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Startup time: 25.4s (prepare environment: 8.3s, import torch: 7.6s, import gradio: 2.0s, setup paths: 1.2s, initialize shared: 0.2s, other imports: 1.3s, load scripts: 2.4s, create ui: 1.2s, gradio launch: 0.9s).
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Loading VAE weights specified in settings: J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.18 seconds
Model loaded in 94.9s (load weights from disk: 1.8s, forge load real models: 74.0s, forge set components: 0.5s, forge finalize: 3.2s, load VAE: 3.5s, load textual inversion embeddings: 3.4s, calculate empty prompt: 8.3s).
*** Error running process_before_every_sampling: J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\extensions-builtin\sd_forge_kohya_hrfix\scripts\kohya_hrfix.py
Traceback (most recent call last):
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\scripts.py", line 830, in process_before_every_sampling
script.process_before_every_sampling(p, *script_args, **kwargs)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\extensions-builtin\sd_forge_kohya_hrfix\scripts\kohya_hrfix.py", line 40, in process_before_every_sampling
unet = opPatchModelAddDownscale.patch(unet, block_number, downscale_factor, start_percent, end_percent, downscale_after_skip, downscale_method, upscale_method)[0]
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\ldm_patched\contrib\external_model_downscale.py", line 26, in patch
sigma_end = model.model.model_sampling.percent_to_sigma(end_percent)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\ldm_patched\modules\model_sampling.py", line 86, in percent_to_sigma
if percent <= 0.0:
TypeError: '<=' not supported between instances of 'str' and 'float'
---
To load target model SDXL
Begin to load 1 model
loading in lowvram mode 788.1377696990967
Moving model(s) has taken 2.21 seconds
0%| | 0/20 [00:12<?, ?it/s]
*** Error completing request | 0/1000 [00:00<?, ?it/s]
*** Arguments: ('task(8kwihrctx3p6fe0)', <gradio.routes.Request object at 0x000001F7DC0F65F0>, 'TEST', 'NEGPROMPT TEST', [], 20, 'Euler a', 50, 1, 6.5, 1024, 704, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', ['Downcast alphas_cumprod: True'], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, None, None, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 'Seed', '', None, 'Nothing', '', None, 'Nothing', '', None, 'True', False, False, False, False, False, False, 0, False, [], '') {}
Traceback (most recent call last):
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\txt2img.py", line 110, in txt2img
processed = processing.process_images(p)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 749, in process_images
res = process_images_inner(p)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 920, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 260, in launch_sampling
return func()
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 149, in sample_euler_ancestral
d = to_d(x, sigmas[i], denoised)
File "J:\STABLEDIFFUSIONFORGE\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 48, in to_d
return (x - denoised) / utils.append_dims(sigma, x.ndim)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
---
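Separate from the device error, that log also shows the built-in `sd_forge_kohya_hrfix` script crashing because `percent_to_sigma` received `end_percent` as a string, so `percent <= 0.0` raised a TypeError. A hedged sketch of the failure shape (a hypothetical stand-in, not the real `model_sampling` code; the actual fix may differ):

```python
def percent_to_sigma(percent, sigma_max=14.6, sigma_min=0.03):
    # Hypothetical stand-in illustrating only the type guard: the UI handed
    # end_percent through as a str, so the comparison below raised a TypeError.
    percent = float(percent)        # guard against str values from the UI
    if percent <= 0.0:
        return 999999999.9          # sentinel for "never reached"
    if percent >= 1.0:
        return 0.0
    return sigma_min + (1.0 - percent) * (sigma_max - sigma_min)

print(percent_to_sigma("0.35"))     # works with the cast; TypeError without it
```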
@Diego0920 @LerkyBoy update and try again?
No luck
https://pastebin.com/UzVnseFQ
update and try again? @Diego0920
Same problem. --always-gpu worked previously; now it fails with or without it. The LCM sampler gives this error, but LCM Karras does work, even without --always-gpu.
venv "C:\StableDiffusion\stable-diffusion-webui-forge\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: f0.0.10-latest-63-gd11c9d75 Commit hash: d11c9d75064b93b988a8a029f8361056262cd674 Launching Web UI with arguments: --listen --xformers --always-gpu --ckpt-dir C:/StableDiffusion/StableDiffusionModels --vae-dir C:/StableDiffusion/StableDiffusionVae --lora-dir C:/StableDiffusion/StableDiffusionLora Total VRAM 2048 MB, total RAM 8045 MB Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton' xformers version: 0.0.23.post1 Set vram state to: LOW_VRAM Device: cuda:0 NVIDIA GeForce GTX 1050 : native VAE dtype: torch.float32 Using xformers cross attention ControlNet preprocessor location: C:\StableDiffusion\stable-diffusion-webui-forge\models\ControlNetPreprocessor [-] ADetailer initialized. version: 24.1.2, num models: 9 Loading weights [fd02fa0c85] from C:/StableDiffusion/StableDiffusionModels\LCM\CDMTRnRv3LCM02.safetensors 2024-02-07 09:41:42,259 - ControlNet - INFO - ControlNet UI callback registered. model_type EPS UNet ADM Dimension 0 Running on local URL: http://0.0.0.0:7860
To create a public link, set share=True
in launch()
.
Startup time: 24.2s (prepare environment: 5.0s, import torch: 5.8s, import gradio: 1.5s, setup paths: 0.9s, initialize shared: 0.2s, other imports: 0.9s, list SD models: 0.9s, load scripts: 3.6s, create ui: 1.1s, gradio launch: 4.3s).
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
Loading VAE weights specified in settings: C:/StableDiffusion/StableDiffusionVae\vaeftmse840000emapruned.ckpt
To load target model SD1ClipModel
Begin to load 1 model
Moving model(s) has taken 0.19 seconds
Model loaded in 10.6s (load weights from disk: 0.8s, forge load real models: 8.2s, load VAE: 0.8s, calculate empty prompt: 0.7s).
To load target model BaseModel
Begin to load 1 model
loading in lowvram mode 204.01322174072266
Moving model(s) has taken 0.23 seconds
*** Error completing request
*** Arguments: ('task(onp897k9aqha9ky)', <gradio.routes.Request object at 0x000002ACCF5BF130>, '1girl, solo, cyberpunk', '', [], 8, 'LCM', 1, 1, 2, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), True, 1.01, 1.02, 0.99, 0.95, True, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.5, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': True, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\txt2img.py", line 110, in txt2img
processed = processing.process_images(p)
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\processing.py", line 749, in process_images
res = process_images_inner(p)
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\processing.py", line 920, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 214, in sample
sigmas = self.get_sigmas(p, steps).to(shared.device)
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 136, in get_sigmas
sigmas = self.model_wrap.get_sigmas(steps)
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\sd_samplers_lcm.py", line 32, in get_sigmas
return sampling.append_zero(self.t_to_sigma(t))
File "C:\StableDiffusion\stable-diffusion-webui-forge\modules\sd_samplers_lcm.py", line 43, in t_to_sigma
return super().t_to_sigma(t)
File "C:\StableDiffusion\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\external.py", line 83, in t_to_sigma
log_sigma = (1 - w) * self.log_sigmas[low_idx] + w * self.log_sigmas[high_idx]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
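This variant fails before sampling even starts: the LCM wrapper's `t_to_sigma` interpolates into a `log_sigmas` buffer, and the buffer and the timestep tensor sit on different devices. A rough standalone sketch (hypothetical values, not the real k-diffusion buffers; assumes CUDA):

```python
import torch

log_sigmas = torch.linspace(2.7, -3.5, 1000, device="cuda")  # model buffer on cuda:0
t = torch.linspace(999.0, 0.0, 8)                            # timesteps built on the cpu

def t_to_sigma(t, log_sigmas):
    # mirrors the interpolation in k_diffusion.external.DiscreteSchedule.t_to_sigma
    low_idx = t.floor().long()
    high_idx = t.ceil().long()
    w = t.frac()
    log_sigma = (1 - w) * log_sigmas[low_idx] + w * log_sigmas[high_idx]
    return log_sigma.exp()

# t_to_sigma(t, log_sigmas)                   # RuntimeError: cuda:0 and cpu!
sigmas = t_to_sigma(t.to(log_sigmas.device), log_sigmas)  # fix: co-locate t
print(sigmas.device)                          # cuda:0
```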
Can confirm it now works with Euler A. Great job m8. Rest of the people, don't forget to git pull.