
Can SVDQuant (ComfyUI) be used with FLUX PuLID?

pyoliu opened this issue 11 months ago · 9 comments

```
model_type FLUX
[2024-12-25 14:49:43.765] [info] Loading partial weights from /home/ubuntu/.cache/huggingface/hub/models--mit-han-lab--svdquant-models/snapshots/0684a6e2693230e1aa0c54827a90e64bf082c95d/svdq-flux.1-dev-lora-realism.safetensors
[2024-12-25 14:49:43.765] [warning] Unable to pin memory: operation not supported
[2024-12-25 14:49:43.765] [info] Try MAP_PRIVATE
[2024-12-25 14:49:43.861] [info] Done.
[2024-12-25 14:49:43.871] [info] Set lora scale to 1 (skip 32 ranks)
clip missing: ['text_projection.weight']
Requested to load FluxClipModel_
loaded completely 9.5367431640625e+25 9319.23095703125 True
Requested to load Flux
loaded completely 9.5367431640625e+25 922.6993408203125 True
  0%|          | 0/25 [00:00<?, ?it/s]
Passing `txt_ids` 3d torch.Tensor is deprecated. Please remove the batch dimension and pass it as a 2d torch Tensor
Passing `img_ids` 3d torch.Tensor is deprecated. Please remove the batch dimension and pass it as a 2d torch Tensor
  0%|          | 0/25 [00:00<?, ?it/s]
!!! Exception during processing !!!
Traceback (most recent call last):
  File "/home/ubuntu/16T/lsm/ComfyUI/execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/ubuntu/16T/lsm/ComfyUI/execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/ubuntu/16T/lsm/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/ubuntu/16T/lsm/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 633, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 897, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 866, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 850, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 707, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/k_diffusion/sampling.py", line 1098, in sample_deis
    denoised = model(x_cur, t_cur * s_in, **extra_args)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 379, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 832, in __call__
    return self.predict_noise(*args, **kwargs)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 835, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 359, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 195, in calc_cond_batch
    return executor.execute(model, conds, x_in, timestep, model_options)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/samplers.py", line 308, in calc_cond_batch
    output = model.apply_model(input_x, timestep, **c).chunk(batch_chunks)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/model_base.py", line 130, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/ubuntu/16T/lsm/ComfyUI/comfy/model_base.py", line 159, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubuntu/16T/lsm/ComfyUI/custom_nodes/svdquant/nodes.py", line 47, in forward
    out = self.model(
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 522, in forward
    encoder_hidden_states, hidden_states = block(
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/flux/lib/python3.10/site-packages/nunchaku-0.0.2b0-py3.10-linux-x86_64.egg/nunchaku/models/transformer_flux.py", line 49, in forward
    assert image_rotary_emb.shape[2] == batch_size * (txt_tokens + img_tokens)
AssertionError
```

Prompt executed in 44.93 seconds
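The deprecation warnings and the failing assertion appear to be related: nunchaku's FLUX transformer asserts that the rotary embedding covers exactly `batch_size * (txt_tokens + img_tokens)` flattened tokens, and passing the deprecated 3-D (batched) `txt_ids`/`img_ids` plausibly produces an embedding of a different length. A minimal sketch of that shape check, with purely illustrative token counts (the real counts depend on the prompt and image resolution):

```python
# Sketch of the shape invariant asserted in nunchaku's transformer_flux.py.
# The token counts below are hypothetical examples, not values from the run.
def rotary_len_ok(rotary_len, batch_size, txt_tokens, img_tokens):
    # Mirrors: assert image_rotary_emb.shape[2] == batch_size * (txt_tokens + img_tokens)
    return rotary_len == batch_size * (txt_tokens + img_tokens)

batch_size, txt_tokens, img_tokens = 1, 512, 4096

# With 2-D ids (no batch dimension), the rotary embedding spans one
# flattened token sequence of length 512 + 4096 = 4608, so the check passes:
print(rotary_len_ok(4608, batch_size, txt_tokens, img_tokens))   # True

# If the embedding ends up with any other flattened length (e.g. doubled
# by an extra batch dimension), the assertion in nunchaku fails:
print(rotary_len_ok(9216, batch_size, txt_tokens, img_tokens))   # False
```

This is only a reading of the assertion in the traceback; the actual fix landed in the PRs referenced later in this thread.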

pyoliu · Dec 25 '24

Currently, this feature is not supported in ComfyUI. We will consider adding it if there is significant demand.

lmxyy · Jan 11 '25

> Currently, this feature is not supported in ComfyUI. We will consider adding it if there is significant demand.

+1

kaylio · Mar 12 '25

Hope you add PuLID support!!

chenbaiyujason · Mar 12 '25

Hope you add PuLID support!!

wuxxd · Mar 23 '25

Please implement this!

jeje3869 · Apr 03 '25

Hope you add this feature!

BbChip0103 · Apr 06 '25

Hi, does ComfyUI support this now?

ita9naiwa · Apr 06 '25

Hi, can PuLID be used with SVDQuant (Nunchaku) in ComfyUI? Please let us know. Thanks!

OCOMATA · Apr 07 '25

It is on our roadmap for April and is already in progress.

lmxyy · Apr 08 '25

I want PuLID. Currently using it with GGUF. Would love to see what FP4 SVDQuant can do. RTX 5070 user.

MiDonGo64 · Apr 29 '25

#274 is working on this.

lmxyy · Apr 29 '25

#274 and mit-han-lab/ComfyUI-nunchaku#106 have been merged, thanks to @KBRASK. PuLID is now supported, and we will continue improving it. I will close this issue; if you run into any other problems with PuLID, feel free to open a new one.

lmxyy · May 20 '25