lisi31415926
I'm in the chat group. The author has been pretty busy lately~ P.S.: Does using your PR require recompilation?
> > I'm in the chat group. The author has been pretty busy lately~ P.S.: Does using your PR require recompilation?
>
> Does not require recompilation.

How do I install your...
If there are compatibility issues with CPU Offload (ComfyUI integration), would graphics cards with 8GB VRAM be less suitable for your branch? Thanks.
Are there any plans to support CUDA 13.0? Thank you.
You need to use his workflow.
> I think, if I understand it correctly, that the way AI-toolkit works, it is up to the diffusers library to identify the 2511 model as requiring zero_cond_t based on...
> Yes, I replaced that [@6Bf](https://github.com/6Bf)...d24 string in requirements.txt with the latest commit (f6b6a7181eb44f0120b29cd897c129275f366c2a) locally.
>
> I haven't finished baking my lora yet but the sample gens - both...
Per the https://huggingface.co/Qwen/Qwen-Image-Edit-2511 guide, install the latest version of diffusers:

```
pip install git+https://github.com/huggingface/diffusers
```
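After installing from source, loading the model looks roughly like the sketch below. This is an illustrative sketch, not the author's verified setup: it assumes the `QwenImageEditPlusPipeline` class from the model card, a CUDA GPU with enough VRAM for the bf16 weights, and placeholder file names (`input.png`, `output.png`) and prompt text.

```python
import torch
from diffusers import QwenImageEditPlusPipeline
from diffusers.utils import load_image

# Load the 2511 checkpoint; this requires the development build of
# diffusers installed via the pip command above.
pipe = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2511", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# On low-VRAM cards (e.g. the 8GB case discussed above), CPU offload
# trades speed for memory; uncomment instead of pipe.to("cuda"):
# pipe.enable_model_cpu_offload()

image = load_image("input.png")  # placeholder input image
result = pipe(
    image=image,
    prompt="replace the background with a beach",  # example edit prompt
    num_inference_steps=40,
).images[0]
result.save("output.png")
```

Note that `enable_model_cpu_offload()` is the standard diffusers offload hook; whether it behaves well with the ComfyUI integration discussed in this thread is exactly the open question above.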
> qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning.safetensors | FP8 Quantized | FP8 (e4m3fn scaled) precision, fused with 4-step distilled LoRA, optimized for low-memory deployment

It does not work in ComfyUI.