ComfyUI
RuntimeError: CUDA error: invalid argument
I start the server with python3 main.py --listen and get the following output:
Total VRAM 32510 MB, total RAM 51200 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 Tesla V100-PCIE-32GB : cudaMallocAsync
VAE dtype: torch.float32
Using pytorch cross attention
Starting server
To see the GUI go to: http://0.0.0.0:8188
got prompt
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Requested to load SD1ClipModel
Loading 1 new model
!!! Exception during processing !!!
Traceback (most recent call last):
File "/code/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/code/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/code/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/code/ComfyUI/nodes.py", line 57, in encode
cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
File "/code/ComfyUI/comfy/sd.py", line 135, in encode_from_tokens
self.load_model()
File "/code/ComfyUI/comfy/sd.py", line 155, in load_model
model_management.load_model_gpu(self.patcher)
File "/code/ComfyUI/comfy/model_management.py", line 442, in load_model_gpu
return load_models_gpu([model])
File "/code/ComfyUI/comfy/model_management.py", line 397, in load_models_gpu
free_memory(extra_mem, d, models_already_loaded)
File "/code/ComfyUI/comfy/model_management.py", line 355, in free_memory
if get_free_memory(device) > memory_required:
File "/code/ComfyUI/comfy/model_management.py", line 680, in get_free_memory
stats = torch.cuda.memory_stats(dev)
File "/python3.10/site-packages/torch/cuda/memory.py", line 230, in memory_stats
stats = memory_stats_as_nested_dict(device=device)
File "/python3.10/site-packages/torch/cuda/memory.py", line 242, in memory_stats_as_nested_dict
return torch._C._cuda_memoryStats(device)
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
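
The call that raises is torch.cuda.memory_stats(dev) inside model_management.get_free_memory, and the startup log shows the cudaMallocAsync allocator backend is active on the Tesla V100. Per the hint in the error text, rerunning the server as CUDA_LAUNCH_BLOCKING=1 python3 main.py --listen makes CUDA report the failure synchronously. The failing call can also be exercised outside ComfyUI; the sketch below is my own diagnostic, assuming the async allocator is selected through PyTorch's PYTORCH_CUDA_ALLOC_CONF variable (how PyTorch 2.0 exposes the cudaMallocAsync backend):

# Standalone check: does torch.cuda.memory_stats() fail outside ComfyUI when
# the cudaMallocAsync backend is active? (diagnostic sketch, not ComfyUI code)
import os
# The allocator backend must be chosen before torch initializes CUDA.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

import torch

dev = torch.device("cuda:0")
x = torch.ones(1024, 1024, device=dev)  # force a real allocation
torch.cuda.synchronize(dev)
print(torch.cuda.memory_stats(dev))     # the call that raises in the traceback above

If this script raises the same "invalid argument", the problem sits in the PyTorch/driver combination with the async allocator rather than in ComfyUI; if it runs cleanly, the failure is specific to ComfyUI's memory query. Recent ComfyUI versions also have a --disable-cuda-malloc startup flag that falls back to PyTorch's default caching allocator, which is a quick way to isolate the allocator as the variable.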
Environment: torch 2.0.1+cu118
nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
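
As a sanity check (my own suggestion, not from the report), the versions above can be confirmed from the same Python environment that runs ComfyUI:

# Environment check: values should match what is reported above
# (torch 2.0.1+cu118, CUDA 11.8, Tesla V100-PCIE-32GB).
import torch

print("torch:", torch.__version__)                   # expected 2.0.1+cu118
print("built for CUDA:", torch.version.cuda)         # expected 11.8
print("cuda available:", torch.cuda.is_available())
print("device:", torch.cuda.get_device_name(0))      # expected Tesla V100-PCIE-32GB
print("compute capability:", torch.cuda.get_device_capability(0))  # V100 is (7, 0)

If any of these fail or report unexpected values, the problem is below ComfyUI, in the PyTorch install or the NVIDIA driver.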
same error