ComfyUI
SDXL CLIPTextEncode: User specified autocast device_type must be 'cuda' or 'cpu'
python main.py --normalvram --use-quad-cross-attention --auto-launch --disable-smart-memory --preview-method latent2rgb --dont-upcast-attention
(The model is automatically loaded in lowvram mode because of the 4 GB of VRAM.)
Total VRAM 4096 MB, total RAM 15943 MB
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: cuda:0 AMD Radeon RX 570 Series : native
VAE dtype: torch.float32
disabling upcasting of attention
This happened somewhat randomly while using an SDXL model: I was running my workflow without any issues, then this error started appearing more or less consistently.
Happens with other TextEncode nodes from custom nodes as well.
Error occurred when executing CLIPTextEncode:
User specified autocast device_type must be 'cuda' or 'cpu'
File "/home/rabid/Desktop/comfytwoai/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/nodes.py", line 56, in encode
cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sd.py", line 120, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sdxl_clip.py", line 56, in encode_token_weights
g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sd1_clip.py", line 18, in encode_token_weights
out, pooled = self.encode(to_encode)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sd1_clip.py", line 179, in encode
return self(tokens)
^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/venv/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sd1_clip.py", line 150, in forward
with precision_scope(model_management.get_autocast_device(device), torch.float32):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 201, in __init__
raise RuntimeError('User specified autocast device_type must be \'cuda\' or \'cpu\'')
Loading 1 new model
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Requested to load SDXLClipModel
Loading 1 new model
loading in lowvram mode 445.0304231643677
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/home/rabid/Desktop/comfytwoai/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/nodes.py", line 56, in encode
cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sd.py", line 120, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sdxl_clip.py", line 56, in encode_token_weights
g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sd1_clip.py", line 18, in encode_token_weights
out, pooled = self.encode(to_encode)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sd1_clip.py", line 179, in encode
return self(tokens)
^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/venv/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rabid/Desktop/comfytwoai/comfy/sd1_clip.py", line 150, in forward
with precision_scope(model_management.get_autocast_device(device), torch.float32):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 201, in __init__
raise RuntimeError('User specified autocast device_type must be \'cuda\' or \'cpu\'')
RuntimeError: User specified autocast device_type must be 'cuda' or 'cpu'
Switching to an SD 1.5 model in the same workflow and session where the error occurred works with no problems.
I think it has to do with running --normalvram without manually specifying --lowvram for an SDXL model that exceeds my VRAM limit. If I set --lowvram myself when using SDXL models, the autocast issue seems to stop (not thoroughly tested yet). However, forcing everything into lowvram mode also affects the workflow steps that don't need to be offloaded to RAM, and all the shuffling back and forth makes things very sluggish and slow.
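For what it's worth, the failure itself is easy to reproduce outside ComfyUI. Here is a minimal sketch of the suspected failure mode, assuming that under lowvram dispatch the text encoder's reported device is something other than 'cuda' or 'cpu' (for example accelerate's 'meta' offload device), which this torch version's autocast rejects:

import torch

# A parameter offloaded by accelerate can report the 'meta' device.
param = torch.nn.Parameter(torch.empty(1, device="meta"))
print(param.device.type)  # 'meta'

# torch.autocast here only accepts 'cuda' or 'cpu' as device_type, so
# building it from the module's reported device raises the exact error above.
try:
    with torch.autocast(device_type=param.device.type, dtype=torch.float32):
        pass
except RuntimeError as e:
    print(e)  # User specified autocast device_type must be 'cuda' or 'cpu'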
Editing line 150 of sd1_clip.py to
with precision_scope(model_management.get_autocast_device("cuda"), torch.float32):
seems to resolve the autocast issue on --normalvram with a checkpoint large enough to trigger the lowvram load path (and it also solves the issue of too much being offloaded to RAM when I'm using ControlNets).
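If hardcoding "cuda" feels too blunt, a slightly more defensive variant of the same workaround could look like the sketch below. autocast_device_type is a hypothetical helper, not part of ComfyUI, and this assumes precision_scope is torch.autocast as the traceback suggests:

import torch

def autocast_device_type(device):
    # Hypothetical helper: fall back to 'cuda' only when the reported device
    # type is one that this torch version's autocast would reject (e.g. while
    # accelerate has the model dispatched in lowvram mode).
    dev_type = device.type if isinstance(device, torch.device) else str(device)
    return dev_type if dev_type in ("cuda", "cpu") else "cuda"

# sd1_clip.py line 150 would then read:
# with precision_scope(autocast_device_type(device), torch.float32):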
got prompt
INFO:comfyui-prompt-control:Resolving wildcards...
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Requested to load SDXLClipModel
Loading 1 new model
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 2338.4615383148193
100%|████████████████████████████████████████████████████████████████████████████████████| 12/12 [01:52<00:00, 9.37s/it]
Prompt executed in 176.74 seconds
got prompt
INFO:comfyui-prompt-control:Resolving wildcards...
Requested to load CLIPVisionModelWithProjection
Loading 1 new model
WARNING:accelerate.big_modeling:You shouldn't move a model when it is dispatched on multiple devices.
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 1689.6255254745483
100%|████████████████████████████████████████████████████████████████████████████████████| 12/12 [03:34<00:00, 17.84s/it]
Prompt executed in 399.69 seconds (With --lowvram, this workflow with IP-Adapter was taking 400+ seconds just to start inference.)
Hi, could you try the "Instant LoRA" ComfyUI project? It keeps showing me a CUDA TypeError under DirectML, probably from IP-Adapter, on an AMD RX 480 8GB. I haven't had any problems without IP-Adapter so far.
Instead of the --directml flag, should I use the flags you used, or should I just try --normalvram? The RX 570/580 and RX 470/480 are similar cards.
Instant LoRA and other resources for ComfyUI: https://civitai.com/articles/2345/aloeveras-instant-lora-no-training-15-read-new-info https://github.com/nerdyrodent/AVeryComfyNerd
I have a similar issue.
(venv) (base) ➜ ComfyUI git:(master) ✗ python main.py
** ComfyUI start up time: 2023-12-05 08:45:24.199387
Prestartup times for custom nodes:
0.0 seconds: /Users/moxixuan/Code/ComfyUI/custom_nodes/ComfyUI-Manager
Total VRAM 32768 MB, total RAM 32768 MB
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Adding extra search path checkpoints /Users/moxixuan/Code/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs /Users/moxixuan/Code/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae /Users/moxixuan/Code/stable-diffusion-webui/models/VAE
Adding extra search path loras /Users/moxixuan/Code/stable-diffusion-webui/models/Lora
Adding extra search path loras /Users/moxixuan/Code/stable-diffusion-webui/models/LyCORIS
Adding extra search path upscale_models /Users/moxixuan/Code/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models /Users/moxixuan/Code/stable-diffusion-webui/models/RealESRGAN
Adding extra search path upscale_models /Users/moxixuan/Code/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings /Users/moxixuan/Code/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks /Users/moxixuan/Code/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet /Users/moxixuan/Code/stable-diffusion-webui/extensions/sd-webui-controlnet/models
Loading: ComfyUI-Manager (V1.5.2)
ComfyUI Revision: 1778 [26b1c0a7] | Released on '2023-12-04'
[comfyui_controlnet_aux] | INFO -> Using ckpts path: /Users/moxixuan/Code/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
/Users/moxixuan/Code/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
Downloading anime face detector...
Failed to download lbpcascade_animeface.xml so please download it in /Users/moxixuan/Code/ComfyUI/custom_nodes/IPAdapter-ComfyUI.
Import times for custom nodes:
0.0 seconds: /Users/moxixuan/Code/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus
0.0 seconds: /Users/moxixuan/Code/ComfyUI/custom_nodes/IPAdapter-ComfyUI
0.0 seconds: /Users/moxixuan/Code/ComfyUI/custom_nodes/AIGODLIKE-COMFYUI-TRANSLATION
0.1 seconds: /Users/moxixuan/Code/ComfyUI/custom_nodes/ComfyUI-Manager
0.5 seconds: /Users/moxixuan/Code/ComfyUI/custom_nodes/comfyui_controlnet_aux
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
/Users/moxixuan/Code/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
model_type EPS
adm 0
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['model_ema.decay', 'model_ema.diffusion_modelinput_blocks00bias', 'model_ema.diffusion_modelinput_blocks00weight', 'model_ema.diffusion_modelinput_blocks100emb_layers1bias', 'model_ema.diffusion_modelinput_blocks100emb_layers1weight', 'model_ema.diffusion_modelinput_blocks100in_layers0bias', 'model_ema.diffusion_modelin
Requested to load CLIPVisionModelWithProjection
Loading 1 new model
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
  0%|          | 0/20 [00:00<?, ?it/s]
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/Users/moxixuan/Code/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/moxixuan/Code/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/moxixuan/Code/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/moxixuan/Code/ComfyUI/nodes.py", line 1299, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/Users/moxixuan/Code/ComfyUI/nodes.py", line 1269, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/Users/moxixuan/Code/ComfyUI/comfy/sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/Users/moxixuan/Code/ComfyUI/comfy/samplers.py", line 711, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/Users/moxixuan/Code/ComfyUI/comfy/samplers.py", line 617, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/Users/moxixuan/Code/ComfyUI/comfy/samplers.py", line 556, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/Users/moxixuan/Code/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/moxixuan/Code/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "/Users/moxixuan/Code/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/moxixuan/Code/ComfyUI/comfy/samplers.py", line 277, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
  File "/Users/moxixuan/Code/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/moxixuan/Code/ComfyUI/comfy/samplers.py", line 267, in forward
    return self.apply_model(*args, **kwargs)
  File "/Users/moxixuan/Code/ComfyUI/comfy/samplers.py", line 264, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
  File "/Users/moxixuan/Code/ComfyUI/comfy/samplers.py", line 252, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "/Users/moxixuan/Code/ComfyUI/comfy/samplers.py", line 230, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "/Users/moxixuan/Code/ComfyUI/comfy/model_base.py", line 83, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/Users/moxixuan/Code/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/moxixuan/Code/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 854, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "/Users/moxixuan/Code/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 46, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "/Users/moxixuan/Code/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/moxixuan/Code/ComfyUI/comfy/ldm/modules/attention.py", line 590, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "/Users/moxixuan/Code/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/moxixuan/Code/ComfyUI/comfy/ldm/modules/attention.py", line 417, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "/Users/moxixuan/Code/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 190, in checkpoint
    return func(*inputs)
  File "/Users/moxixuan/Code/ComfyUI/comfy/ldm/modules/attention.py", line 514, in _forward
    n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
  File "/Users/moxixuan/Code/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 251, in __call__
    with torch.autocast(device_type=self.device.type, dtype=self.dtype):
  File "/Users/moxixuan/Code/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 201, in __init__
    raise RuntimeError('User specified autocast device_type must be \'cuda\' or \'cpu\'')
RuntimeError: User specified autocast device_type must be 'cuda' or 'cpu'
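The mps case above trips the same check: on Apple Silicon self.device.type is 'mps', which that torch version's autocast rejects just like the offloaded device earlier in the thread. Below is a sketch of a guard that could wrap the autocast call in IPAdapterPlus.py; the nullcontext fallback is an assumption for illustration, not the extension's actual fix:

import contextlib
import torch

device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
dtype = torch.float16 if device.type == "cuda" else torch.bfloat16  # stand-in for self.dtype

# Only construct autocast for device types this torch version accepts;
# otherwise run the block under a no-op context at the model's dtype.
if device.type in ("cuda", "cpu"):
    scope = torch.autocast(device_type=device.type, dtype=dtype)
else:
    scope = contextlib.nullcontext()

with scope:
    pass  # ... the IP-Adapter cross-attention patch would run here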
I am also having an issue with SDXL CLIPTextEncode nodes crashing with RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! when using unCLIPConditioning nodes.
I am running in lowvram mode.
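For context, that one is a plain device mismatch rather than the autocast check above: some tensor in the unCLIP conditioning path stayed on the CPU while the rest of the model ran on cuda:0. A minimal illustration with hypothetical tensors (not the actual unCLIP code):

import torch

cpu_tensor = torch.randn(4)  # never moved off the CPU
gpu_tensor = torch.randn(4, device="cuda") if torch.cuda.is_available() else torch.randn(4)

# Adding tensors on different devices raises "Expected all tensors to be
# on the same device"; moving one operand over resolves it.
result = cpu_tensor.to(gpu_tensor.device) + gpu_tensor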
@TheDudeFromCI that specific issue should be fixed now.
I am using the latest version of ComfyUI, as far as I'm aware. I pulled from main two days ago.
Pull again.
Oh, thank you! It's working as intended now.