ComfyUI-ToonCrafter

4090 out of memory

Open · Kuvshin8 opened this issue 1 year ago · 3 comments

!!! Exception during processing!!! Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 19.13 GiB
Requested : 2.50 GiB
Device limit : 23.64 GiB
Free (according to CUDA): 8.81 MiB
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB
Traceback (most recent call last):
  File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/__init__.py", line 157, in get_image
    batch_samples = batch_ddim_sampling(model, cond, noise_shape, n_samples=1, ddim_steps=steps, ddim_eta=eta, cfg_scale=cfg_scale, hs=hs)
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/scripts/evaluation/funcs.py", line 79, in batch_ddim_sampling
    batch_images = model.decode_first_stage(samples, **additional_decode_kwargs)
  File "/workspace/ComfyUI/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/lvdm/models/ddpm3d.py", line 683, in decode_first_stage
    return self.decode_core(z, **kwargs)
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/lvdm/models/ddpm3d.py", line 671, in decode_core
    out = self.first_stage_model.decode(
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/lvdm/models/autoencoder.py", line 119, in decode
    dec = self.decoder(z, **kwargs)  # change for SVD decoder by adding **kwargs
  File "/workspace/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/lvdm/models/autoencoder_dualref.py", line 510, in forward
    h = self.up[i_level].block[i_block](h, temb, **kwargs)
  File "/workspace/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/workspace/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/lvdm/models/autoencoder_dualref.py", line 901, in forward
    x = super().forward(x, temb)
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/lvdm/models/autoencoder_dualref.py", line 78, in forward
    h = nonlinearity(h)
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/lvdm/models/autoencoder_dualref.py", line 29, in nonlinearity
    return x * torch.sigmoid(x)
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 19.13 GiB
Requested : 2.50 GiB
Device limit : 23.64 GiB
Free (according to CUDA): 8.81 MiB
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

Prompt executed in 11.75 seconds
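Why does a 2.50 GiB request fail when the device limit minus the allocated amount still leaves about 4.5 GiB? The key figure in the log is "Free (according to CUDA): 8.81 MiB": the rest of the gap is held by the CUDA context, the display, other processes, and memory PyTorch has reserved but not returned to the driver. A back-of-the-envelope sketch of the numbers above (the values are copied from the log; the variable names are ours, not ComfyUI's):

```python
# Arithmetic behind the OOM message above. Values come straight from the log;
# this is an illustrative sketch, not ComfyUI or ToonCrafter code.

GIB = 1024 ** 3
MIB = 1024 ** 2

allocated = 19.13 * GIB      # "Currently allocated"
requested = 2.50 * GIB       # "Requested"
device_limit = 23.64 * GIB   # "Device limit"
cuda_free = 8.81 * MIB       # "Free (according to CUDA)"

# Naively there seems to be room: limit minus allocated leaves ~4.51 GiB.
naive_headroom = device_limit - allocated
print(f"naive headroom: {naive_headroom / GIB:.2f} GiB")  # 4.51 GiB

# But CUDA itself reports almost nothing free, because the gap is occupied
# by the CUDA context, other processes, and PyTorch's reserved cache.
fits = requested <= cuda_free
print(f"request fits in CUDA-reported free memory: {fits}")  # False
```

This is why the decoder's activation-heavy final stage is where the crash lands: the sampler already holds ~19 GiB, and the decode step asks for one more large contiguous block that the driver cannot supply.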

Kuvshin8 · Jun 01 '24 16:06

Running with --lowvram gives the same out-of-memory error.

Kuvshin8 · Jun 01 '24 16:06

There's been some discussion here about hardcoding half precision in ToonCrafter's code so that it fits in under 24 GB of VRAM.
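The appeal of half precision is easy to estimate: float16 stores each weight in 2 bytes instead of float32's 4, roughly halving the footprint of the model weights (activations, the KV/feature buffers, and the CUDA context still add on top). A minimal sketch of that arithmetic, using a purely hypothetical parameter count for illustration:

```python
# Back-of-the-envelope estimate of weight memory in fp32 vs fp16.
# The parameter count below is hypothetical, chosen only for illustration;
# it is not ToonCrafter's actual size.

def weight_gib(n_params: int, bytes_per_param: int) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return n_params * bytes_per_param / 1024 ** 3

n_params = 1_500_000_000  # hypothetical ~1.5B-parameter model

fp32 = weight_gib(n_params, 4)  # float32: 4 bytes per weight
fp16 = weight_gib(n_params, 2)  # float16: 2 bytes per weight

print(f"fp32 weights: {fp32:.2f} GiB")  # 5.59 GiB
print(f"fp16 weights: {fp16:.2f} GiB")  # 2.79 GiB
```

In PyTorch the usual levers for this are casting the model (`model.half()`) or running the forward pass under `torch.autocast`; which one the discussion settled on is covered in the linked thread.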

user-vm · Jun 01 '24 23:06

A model selector was implemented, and the README was updated with a link to half-precision weights, which should take care of this issue.

FizzleDorf · Jun 02 '24 18:06