
Error occurred when executing VAEDecode:

Vortex1x1x1x opened this issue 1 year ago · 3 comments

When I submit my prompt, everything works until it reaches the VAEDecode node (I'm running the default workflow right now). It stops and I get this message:

Error occurred when executing VAEDecode:

HIP out of memory. Tried to allocate 2.25 GiB. GPU 0 has a total capacty of 7.98 GiB of which 1.36 GiB is free. Of the allocated memory 6.06 GiB is allocated by PyTorch, and 190.07 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF

File "/home/vortex1x1x1x/ComfyUI/execution.py", line 154, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/vortex1x1x1x/ComfyUI/execution.py", line 84, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/vortex1x1x1x/ComfyUI/execution.py", line 77, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/vortex1x1x1x/ComfyUI/nodes.py", line 267, in decode
    return (vae.decode(samples["samples"]), )
File "/home/vortex1x1x1x/ComfyUI/comfy/sd.py", line 240, in decode
    pixel_samples = self.decode_tiled_(samples_in)
File "/home/vortex1x1x1x/ComfyUI/comfy/sd.py", line 209, in decode_tiled_
    comfy.utils.tiled_scale(samples, decode_fn, tile_x, tile_y, overlap, upscale_amount = 8, output_device=self.output_device, pbar = pbar))
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "/home/vortex1x1x1x/ComfyUI/comfy/utils.py", line 418, in tiled_scale
    ps = function(s_in).to(output_device)
File "/home/vortex1x1x1x/ComfyUI/comfy/sd.py", line 205, in <lambda>
    decode_fn = lambda a: (self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)) + 1.0).float()
File "/home/vortex1x1x1x/ComfyUI/comfy/ldm/models/autoencoder.py", line 202, in decode
    dec = self.decoder(dec, **decoder_kwargs)
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/vortex1x1x1x/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 639, in forward
    h = self.up[i_level].upsample(h)
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/vortex1x1x1x/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 72, in forward
    x = self.conv(x)
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/vortex1x1x1x/ComfyUI/comfy/ops.py", line 43, in forward
    return super().forward(*args, **kwargs)
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
File "/home/vortex1x1x1x/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,

I have an RX 7600, which has 8 GB of VRAM, and I have 16 GB of system RAM.
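The error message's own suggestion (setting max_split_size_mb) can be tried by exporting the ROCm allocator config before launching ComfyUI. A sketch; 512 is an illustrative starting value, not something confirmed in this thread:

```shell
# Reduce allocator fragmentation on ROCm/HIP, per the error message's hint.
# Smaller values split cached blocks more aggressively, at some speed cost.
export PYTORCH_HIP_ALLOC_CONF=max_split_size_mb:512
# then relaunch, e.g.: python main.py --lowvram
```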

Vortex1x1x1x avatar Jan 01 '24 16:01 Vortex1x1x1x

Hey, I am having the same issue,

Error occurred when executing VAEDecode:

Allocation on device 0 would exceed allowed memory. (out of memory) Currently allocated : 1.26 GiB Requested : 256.00 MiB Device limit : 2.00 GiB Free (according to CUDA): 0 bytes PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 154, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 84, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\execution.py", line 77, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\nodes.py", line 267, in decode
    return (vae.decode(samples["samples"]), )
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\comfy\sd.py", line 240, in decode
    pixel_samples = self.decode_tiled_(samples_in)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\comfy\sd.py", line 207, in decode_tiled_
    (comfy.utils.tiled_scale(samples, decode_fn, tile_x // 2, tile_y * 2, overlap, upscale_amount = 8, output_device=self.output_device, pbar = pbar) +
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\comfy\utils.py", line 418, in tiled_scale
    ps = function(s_in).to(output_device)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\comfy\sd.py", line 205, in <lambda>
    decode_fn = lambda a: (self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)) + 1.0).float()
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\comfy\ldm\models\autoencoder.py", line 202, in decode
    dec = self.decoder(dec, **decoder_kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 639, in forward
    h = self.up[i_level].upsample(h)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 72, in forward
    x = self.conv(x)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\comfy\ops.py", line 43, in forward
    return super().forward(*args, **kwargs)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
File "D:\StabilityMatrix-win-x64\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,

I have a GTX 1050 Ti and 16 GB of RAM.

Yigitera avatar Jan 02 '24 16:01 Yigitera

I had to switch to the VAE Decode (Tiled) node; it is working now.
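For anyone wondering why the tiled node helps: the decoder only ever materializes one tile (plus overlap) at a time instead of the full image, so peak VRAM is bounded by the tile size. A toy sketch of the idea, not ComfyUI's actual implementation (which lives in comfy.utils.tiled_scale and additionally blends overlapping regions):

```python
import numpy as np

def tiled_apply(x, fn, tile=64, overlap=8):
    """Apply fn to overlapping 2-D tiles of x and stitch the results.

    fn only ever sees a (tile + overlap)-sized chunk, never the whole
    array, which is why tiled decoding bounds peak memory use.
    """
    h, w = x.shape
    out = np.zeros_like(x)
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            # Extend each tile by `overlap` so neighbors share a border.
            y1 = min(y0 + tile + overlap, h)
            x1 = min(x0 + tile + overlap, w)
            out[y0:y1, x0:x1] = fn(x[y0:y1, x0:x1])
    return out
```

With a pointwise fn this reproduces the full-image result exactly; a real VAE decoder has receptive-field effects at tile borders, which is what the overlap and blending in the real node are for.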

Vortex1x1x1x avatar Jan 02 '24 16:01 Vortex1x1x1x

I'm having a similar issue with --lowvram. In the past I used SDXL and SD 1.5 successfully with A1111.

comfyui-rocm  | Requested to load SDXLClipModel
comfyui-rocm  | Loading 1 new model
comfyui-rocm  | Requested to load SDXL
comfyui-rocm  | Loading 1 new model
comfyui-rocm  | loading in lowvram mode 64.0
100%|██████████| 5/5 [01:25<00:00, 17.06s/it]
comfyui-rocm  | Requested to load AutoencoderKL
comfyui-rocm  | Loading 1 new model
comfyui-rocm  | loading in lowvram mode 64.0
comfyui-rocm  | Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.
comfyui-rocm  | !!! Exception during processing!!! HIP out of memory. Tried to allocate 288.00 MiB. GPU 0 has a total capacty of 512.00 MiB of which 17179869184.00 GiB is free. Of the allocated memory 243.69 MiB is allocated by PyTorch, and 20.31 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF
comfyui-rocm  | Traceback (most recent call last):
comfyui-rocm  |   File "/root/ComfyUI/comfy/sd.py", line 312, in decode
comfyui-rocm  |     pixel_samples[x:x+batch_number] = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
comfyui-rocm  |   File "/root/ComfyUI/comfy/ldm/models/autoencoder.py", line 200, in decode
comfyui-rocm  |     dec = self.decoder(dec, **decoder_kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
comfyui-rocm  |     return self._call_impl(*args, **kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
comfyui-rocm  |     return forward_call(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 639, in forward
comfyui-rocm  |     h = self.up[i_level].upsample(h)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
comfyui-rocm  |     return self._call_impl(*args, **kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
comfyui-rocm  |     return forward_call(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 72, in forward
comfyui-rocm  |     x = self.conv(x)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
comfyui-rocm  |     return self._call_impl(*args, **kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
comfyui-rocm  |     return forward_call(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/ops.py", line 78, in forward
comfyui-rocm  |     return self.forward_comfy_cast_weights(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/ops.py", line 74, in forward_comfy_cast_weights
comfyui-rocm  |     return self._conv_forward(input, weight, bias)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
comfyui-rocm  |     return F.conv2d(input, weight, bias, self.stride,
comfyui-rocm  | torch.cuda.OutOfMemoryError: HIP out of memory. Tried to allocate 288.00 MiB. GPU 0 has a total capacty of 512.00 MiB of which 94.00 MiB is free. Of the allocated memory 145.38 MiB is allocated by PyTorch, and 22.62 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF
comfyui-rocm  | 
comfyui-rocm  | During handling of the above exception, another exception occurred:
comfyui-rocm  | 
comfyui-rocm  | Traceback (most recent call last):
comfyui-rocm  |   File "/root/ComfyUI/execution.py", line 151, in recursive_execute
comfyui-rocm  |     output_data, output_ui = get_output_data(obj, input_data_all)
comfyui-rocm  |   File "/root/ComfyUI/execution.py", line 81, in get_output_data
comfyui-rocm  |     return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
comfyui-rocm  |   File "/root/ComfyUI/execution.py", line 74, in map_node_over_list
comfyui-rocm  |     results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
comfyui-rocm  |   File "/root/ComfyUI/nodes.py", line 270, in decode
comfyui-rocm  |     return (vae.decode(samples["samples"]), )
comfyui-rocm  |   File "/root/ComfyUI/comfy/sd.py", line 318, in decode
comfyui-rocm  |     pixel_samples = self.decode_tiled_(samples_in)
comfyui-rocm  |   File "/root/ComfyUI/comfy/sd.py", line 274, in decode_tiled_
comfyui-rocm  |     (comfy.utils.tiled_scale(samples, decode_fn, tile_x // 2, tile_y * 2, overlap, upscale_amount = self.upscale_ratio, output_device=self.output_device, pbar = pbar) +
comfyui-rocm  |   File "/root/ComfyUI/comfy/utils.py", line 555, in tiled_scale
comfyui-rocm  |     return tiled_scale_multidim(samples, function, (tile_y, tile_x), overlap, upscale_amount, out_channels, output_device, pbar)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
comfyui-rocm  |     return func(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/utils.py", line 529, in tiled_scale_multidim
comfyui-rocm  |     ps = function(s_in).to(output_device)
comfyui-rocm  |   File "/root/ComfyUI/comfy/sd.py", line 272, in <lambda>
comfyui-rocm  |     decode_fn = lambda a: self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)).float()
comfyui-rocm  |   File "/root/ComfyUI/comfy/ldm/models/autoencoder.py", line 200, in decode
comfyui-rocm  |     dec = self.decoder(dec, **decoder_kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
comfyui-rocm  |     return self._call_impl(*args, **kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
comfyui-rocm  |     return forward_call(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 635, in forward
comfyui-rocm  |     h = self.up[i_level].block[i_block](h, temb, **kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
comfyui-rocm  |     return self._call_impl(*args, **kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
comfyui-rocm  |     return forward_call(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 150, in forward
comfyui-rocm  |     h = self.conv2(h)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
comfyui-rocm  |     return self._call_impl(*args, **kwargs)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
comfyui-rocm  |     return forward_call(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/ops.py", line 78, in forward
comfyui-rocm  |     return self.forward_comfy_cast_weights(*args, **kwargs)
comfyui-rocm  |   File "/root/ComfyUI/comfy/ops.py", line 74, in forward_comfy_cast_weights
comfyui-rocm  |     return self._conv_forward(input, weight, bias)
comfyui-rocm  |   File "/usr/lib64/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
comfyui-rocm  |     return F.conv2d(input, weight, bias, self.stride,
comfyui-rocm  | torch.cuda.OutOfMemoryError: HIP out of memory. Tried to allocate 288.00 MiB. GPU 0 has a total capacty of 512.00 MiB of which 17179869184.00 GiB is free. Of the allocated memory 243.69 MiB is allocated by PyTorch, and 20.31 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF
comfyui-rocm  | 
comfyui-rocm  | Prompt executed in 91.18 seconds
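One thing that stands out in this log: PyTorch reports a total capacity of only 512.00 MiB (and a nonsensical "free" figure), which may mean it is binding to an iGPU or a small VRAM carve-out rather than the discrete card. A sketch of how to check and pin the device on ROCm; the index 0 below is an assumption, pick the dGPU from rocminfo's output:

```shell
# Inspect visible ROCm devices first:
#   rocminfo | grep -E "Name|Marketing"
# then pin PyTorch to the discrete GPU (index is an assumption):
export HIP_VISIBLE_DEVICES=0
# and relaunch ComfyUI (inside the container) with this variable set
```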

grigio avatar Jul 12 '24 17:07 grigio