
Error occurred when executing PipelineLoader: Allocation on device

Be-coder opened this issue 1 year ago · 7 comments

(screenshots attached)

What is the problem here? Is it running out of memory? Is there a known workaround? Thank you!

Be-coder avatar May 21 '24 02:05 Be-coder

Hi, thanks for your report, can you show the full trace?

TemryL avatar May 21 '24 02:05 TemryL

(screenshots attached)

Here it is. weight_dtype is float16. Thank you very much.

Be-coder avatar May 21 '24 02:05 Be-coder

Most likely an OOM exception, as in https://github.com/TemryL/ComfyUI-IDM-VTON/issues/4. Can you specify your GPU config?

TemryL avatar May 21 '24 14:05 TemryL

It is an RTX 4060 with 8 GB VRAM and 16 GB system RAM. During the run, system RAM usage peaks while GPU memory usage stays low. Can my configuration run your code?

Be-coder avatar May 22 '24 07:05 Be-coder

Hi. Same error for me, running on an RTX 4080 with 16 GB VRAM (2 GB used by the system).

GPU VRAM fills up within 10 seconds.

AfterHAL avatar May 22 '24 18:05 AfterHAL

+1. GPU: RTX 3090 with 24 GB VRAM. The error info is below:

Error occurred when executing IDM-VTON:

Allocation on device

File "C:\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\nodes\idm_vton.py", line 100, in make_inference
    images = pipeline(
File "C:\ComfyUI\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "C:\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\idm_vton\tryon_pipeline.py", line 1630, in __call__
    mask, masked_image_latents = self.prepare_mask_latents(
File "C:\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\idm_vton\tryon_pipeline.py", line 961, in prepare_mask_latents
    masked_image_latents = self._encode_vae_image(masked_image, generator=generator)
File "C:\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\idm_vton\tryon_pipeline.py", line 921, in _encode_vae_image
    image_latents = retrieve_latents(self.vae.encode(image), generator=generator)
File "C:\ComfyUI\python\lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 260, in encode
    h = self.encoder(x)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\diffusers\models\autoencoders\vae.py", line 172, in forward
    sample = down_block(sample)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1465, in forward
    hidden_states = resnet(hidden_states, temb=None)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\diffusers\models\resnet.py", line 332, in forward
    hidden_states = self.norm1(hidden_states)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
File "C:\ComfyUI\python\lib\site-packages\torch\nn\modules\normalization.py", line 287, in forward
    return F.group_norm(
File "C:\ComfyUI\python\lib\site-packages\torch\nn\functional.py", line 2588, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
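The trace above dies inside `self.vae.encode` during GroupNorm, i.e. a VRAM spike at VAE encode time on top of weights already resident on the GPU. A back-of-envelope estimate shows why even 8-16 GB cards are tight in float16. The parameter counts below are rough assumptions for an SDXL-scale try-on pipeline, not measured values for IDM-VTON:

```python
# Rough VRAM estimate for holding an SDXL-scale try-on pipeline in float16.
# Parameter counts are illustrative assumptions, not measured IDM-VTON figures.
PARAM_COUNTS = {
    "unet": 2_600_000_000,          # assumed SDXL-class UNet
    "text_encoders": 900_000_000,   # assumed combined CLIP encoders
    "vae": 84_000_000,              # assumed SD-style VAE
}
BYTES_PER_PARAM_FP16 = 2

def weights_gib(counts: dict) -> float:
    """Return the float16 weight footprint in GiB (weights only, no activations)."""
    total_bytes = sum(counts.values()) * BYTES_PER_PARAM_FP16
    return total_bytes / 2**30

print(f"~{weights_gib(PARAM_COUNTS):.1f} GiB just for weights")  # ~6.7 GiB
```

Under these assumptions the weights alone approach 7 GiB, so an 8 GB card has almost nothing left for activations, and the VAE encode of a full-resolution image is exactly where the remaining headroom runs out.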

Pythonpa avatar Jun 17 '24 13:06 Pythonpa

Hello,

Same error here. The Pipeline Loader is very slow and fails after 2-5 minutes with an Out Of Memory error.



!!! Exception during processing !!! Allocation on device
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\nodes\pipeline_loader.py", line 68, in load_pipeline
    ).requires_grad_(False).eval().to(DEVICE)
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
[Previous line repeated 5 more times]
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
    return t.to(
torch.cuda.OutOfMemoryError: Allocation on device
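This loader trace fails at `.to(DEVICE)` while copying weights onto the GPU, so the whole pipeline simply does not fit in VRAM at once. Standard diffusers pipelines expose low-VRAM hooks for exactly this (`enable_model_cpu_offload`, `enable_vae_slicing`, `enable_vae_tiling`); whether ComfyUI-IDM-VTON's custom `tryon_pipeline` supports them is an assumption on my part, so the sketch below only calls whichever hooks the object actually provides:

```python
# Sketch: apply diffusers-style memory savers if the pipeline exposes them.
# It is an assumption that the custom IDM-VTON pipeline has these hooks;
# stock diffusers pipelines do.
def apply_memory_savers(pipe) -> list:
    """Call any available low-VRAM hooks on `pipe`; return the names applied."""
    applied = []
    for name in (
        "enable_model_cpu_offload",  # keep only the active submodule on the GPU
        "enable_vae_slicing",        # run the VAE batch-wise in slices
        "enable_vae_tiling",         # push large latents through the VAE in tiles
    ):
        hook = getattr(pipe, name, None)
        if callable(hook):
            hook()
            applied.append(name)
    return applied
```

Calling `apply_memory_savers(pipeline)` before moving anything to CUDA trades speed for a much smaller peak allocation; with model CPU offload in particular, the `.to(DEVICE)` step that fails above is no longer needed, since modules are streamed to the GPU on demand.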

ZombieNeighbor avatar Jul 17 '24 10:07 ZombieNeighbor