ComfyUI-layerdiffuse
SD15 models supported
LayerDiffuse SD15 brings 4 new models:
- Generate FG: use the apply node the same way as before.
- Generate FG + Blended given BG: needs batch size = 2N.
- Generate BG + Blended given FG: needs batch size = 2N.
- Generate BG + FG + Blended together: needs batch size = 3N (see the batch-size sketch below).
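A rough illustration of the batch-size rule (a hypothetical helper, not part of this repo):

```python
# Hypothetical helper: the latent batch must be a whole multiple of the
# number of images each config produces per logical sample.
def logical_samples(batch_size: int, images_per_sample: int) -> int:
    # images_per_sample is 2 for the "FG + Blended" / "BG + Blended" configs
    # and 3 for "BG + FG + Blended together".
    if batch_size % images_per_sample != 0:
        raise ValueError(
            f"batch size {batch_size} must be a multiple of {images_per_sample}"
        )
    return batch_size // images_per_sample  # N


assert logical_samples(6, 3) == 2  # e.g. a batch of 6 for the 3N config
```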
Notes:
- Dropdown options in the previous nodes have been updated. You will need to re-select the config in your existing workflows.
- Unlike the Forge implementation, which concatenates the conds for fg/bg/blended, the ComfyUI implementation directly overwrites the global cond with the cond passed to the layer diffusion node.
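A conceptual sketch of that difference (tensor shapes and function names here are assumptions for illustration, not the repo's actual code):

```python
import torch

def concat_cond(global_cond: torch.Tensor, layer_cond: torch.Tensor) -> torch.Tensor:
    # Forge-style: append the layer cond along the token axis so both are seen.
    return torch.cat([global_cond, layer_cond], dim=1)

def overwrite_cond(global_cond: torch.Tensor, layer_cond: torch.Tensor) -> torch.Tensor:
    # ComfyUI impl (per the note above): the layer cond replaces the global cond.
    return layer_cond
```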
LayerDiffuseDecode (Split) is added to decode RGBA every N images, so that only the FG images in a batch are decoded.
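The slicing this implies can be pictured as follows (a minimal sketch, assuming the batch repeats in groups of N, e.g. [FG, Blended, FG, Blended, ...] for the 2N configs; only the actual node defines the real layout):

```python
import torch

def take_every_n(images: torch.Tensor, n: int) -> torch.Tensor:
    # images: (B, H, W, C) with B a multiple of n; keep the first image of
    # each group of n, i.e. indices 0, n, 2n, ...
    return images[::n]
```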
SD1.5: Error occurred when executing KSampler:
'UNetModel' object has no attribute 'default_image_only_indicator'
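For context: newer ComfyUI versions define `default_image_only_indicator` on `UNetModel`, so this error usually means ComfyUI itself is too old for the SD15 code path, and updating ComfyUI is the first thing to try. A defensive read would look like this (a sketch only, where `unet` stands for the loaded `UNetModel` instance):

```python
# Hypothetical guard, not the actual patch: fall back to None on older
# UNetModel builds that lack the attribute instead of raising.
image_only_indicator = getattr(unet, "default_image_only_indicator", None)
```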
SD1.5: Error occurred when executing KSampler:
CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
File "/opt/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/opt/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/opt/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/opt/ComfyUI/nodes.py", line 1368, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/opt/ComfyUI/nodes.py", line 1338, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/opt/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
raise e
File "/opt/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "/opt/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 248, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "/opt/ComfyUI/comfy/sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/opt/ComfyUI/comfy/samplers.py", line 703, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/opt/ComfyUI/comfy/samplers.py", line 608, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/opt/ComfyUI/comfy/samplers.py", line 547, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/comfy/samplers.py", line 285, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/comfy/samplers.py", line 272, in forward
return self.apply_model(*args, **kwargs)
File "/opt/ComfyUI/comfy/samplers.py", line 269, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "/opt/ComfyUI/comfy/samplers.py", line 249, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "/opt/ComfyUI/comfy/samplers.py", line 223, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/opt/ComfyUI/comfy/model_base.py", line 96, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 849, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "/opt/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 43, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/comfy/ldm/modules/attention.py", line 632, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/custom_nodes/ComfyUI-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 253, in forward
return func(self, x, context, transformer_options)
File "/opt/ComfyUI/comfy/ldm/modules/attention.py", line 459, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "/opt/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint
return func(*inputs)
File "/opt/ComfyUI/comfy/ldm/modules/attention.py", line 519, in _forward
n = self.attn1(n, context=context_attn1, value=value_attn1)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/ComfyUI/custom_nodes/ComfyUI-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 239, in forward
x = optimized_attention(q, k, v, self.heads)
File "/opt/ComfyUI/comfy/ldm/modules/attention.py", line 326, in attention_xformers
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
return _memory_efficient_attention(
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 341, in _memory_efficient_attention_forward
out, *_ = op.apply(inp, needs_gradient=False)
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/flash.py", line 458, in apply
out, softmax_lse, rng_state = cls.OPERATOR(
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/_ops.py", line 755, in __call__
return self._op(*args, **(kwargs or {}))
File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/xformers/ops/fmha/flash.py", line 106, in _flash_fwd
) = _C_flashattention.fwd(
Why are the generation times slower for SD1.5 than for SDXL? The SDXL checkpoint seems almost twice as fast (3s vs 6s). Am I missing something?
The SD15 models are all attn-sharing, one-step models; they are not directly comparable with the previous SDXL models.
The 4096 error at the sampler is caused by a mismatched image size: resize the original image to 512x512 before passing it to the next node.
So you mean the cut-out image has to match the size of the empty latent, right?
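A minimal sketch of that resize advice, assuming a ComfyUI-style image tensor of shape (B, H, W, C) with values in [0, 1]:

```python
import torch
import torch.nn.functional as F

def resize_to_512(image: torch.Tensor) -> torch.Tensor:
    # interpolate expects (B, C, H, W), so permute, resize, and permute back
    x = image.permute(0, 3, 1, 2)
    x = F.interpolate(x, size=(512, 512), mode="bilinear", align_corners=False)
    return x.permute(0, 2, 3, 1)
```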
Can you share any HF links for attn sharing SD1.5?
You can find all models here: https://huggingface.co/LayerDiffusion/layerdiffusion-v1/tree/main
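If it helps, the models can also be fetched programmatically (the filename and target directory below are assumptions; check the repo listing above for the exact names):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="LayerDiffusion/layerdiffusion-v1",
    filename="layer_sd15_transparent_attn.safetensors",  # assumed filename
    local_dir="ComfyUI/models/layer_model",              # assumed target dir
)
print(path)
```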
Hi. Thank you for creating this wonderful plugin for ComfyUI. Will it be possible to use it with FLUX?