judian17

Results 18 comments of judian17

https://huggingface.co/aha2023/average_MOE_ICEdit-MoE-LoRA/tree/main You can try the icedit-normal_extracted_lora_rank_32-bf16 LoRA in there; it's one I modified myself. It can be used together with nunchaku, but the results will be a bit blurrier. Consider it a stopgap until the nunchaku team ships an official fix. Also, the fp8 version of the fill model should only need around 14 GB of VRAM, have you tried fp8?

@River-Zhang I'm glad that my workflow could be helpful. However, today I discovered some potential issues. When loading this LoRA with ComfyUI, there are a lot of warnings saying "lora...

I used AI to analyze the reasoning code of the project, and it seems that the "lora key not loaded" issue comes from the additional four expert layers introduced by...
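A quick way to check this is to list the keys in the LoRA file and look for the expert-layer entries. This is only a sketch: the `expert` substring is my guess at how those keys are named, and the file name is a placeholder.

```python
from safetensors import safe_open

# "icedit_moe_lora.safetensors" is a placeholder file name
with safe_open("icedit_moe_lora.safetensors", framework="pt", device="cpu") as f:
    keys = list(f.keys())
    expert_keys = [k for k in keys if "expert" in k.lower()]
    print(f"{len(keys)} keys total, {len(expert_keys)} look like expert layers")
    for k in expert_keys[:8]:
        print(k)
```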

@YarvixPA This section of code was not generated by AI. I utilized code from [ComfyUI-FluxTrainer](https://github.com/kijai/ComfyUI-FluxTrainer). However, since there were some bugs in that project's code, I modified it by referring...

I apologize for my vague explanation earlier. To be precise, I used AI-generated code to perform a weighted average on the expert layers of the ICEdit LoRA. The subsequent model...
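For anyone curious, here is a rough sketch of that idea. It is not the exact script I used, and the `.experts.` key pattern is only an assumption about how the expert layers are named in the file.

```python
import torch
from safetensors.torch import load_file, save_file

def average_experts(path_in: str, path_out: str) -> None:
    """Collapse the per-expert LoRA tensors into single tensors via a plain mean."""
    state = load_file(path_in)
    merged, grouped = {}, {}

    for key, tensor in state.items():
        if ".experts." in key:
            # e.g. "...experts.2.lora_A.weight" -> group by the key without the expert index
            prefix, rest = key.split(".experts.", 1)
            _idx, suffix = rest.split(".", 1)
            grouped.setdefault(f"{prefix}.{suffix}", []).append(tensor)
        else:
            merged[key] = tensor  # non-expert keys pass through unchanged

    for base_key, tensors in grouped.items():
        # unweighted mean over the expert copies
        merged[base_key] = torch.stack([t.float() for t in tensors]).mean(dim=0).to(tensors[0].dtype)

    save_file(merged, path_out)
```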

@YarvixPA Sorry, I'm just an amateur ComfyUI user who uses AI to analyze code, and I haven't even written any code myself. I'm afraid I can't provide valuable answers 🥲

@IcyIntuition Yes, this is very helpful for improving quality! Thank you for pointing that out.

@YarvixPA Yes, that's what I did.

@alexgenovese If you don't plan to use it together with [nunchaku](https://github.com/mit-han-lab/ComfyUI-nunchaku), then there is no need for fusion and re-extraction. I do this solely for compatibility with nunchaku, as it...

@YarvixPA @alexgenovese In addition, this is my modified version of the code, which allows for more flexible weight adjustments. For example, running `python weighted_average_moe_lora.py 5 15 30 50` means fusing...
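Roughly, the weight handling works like the sketch below (the merge itself is as in the earlier sketch). This is only an illustration of how percentage-style arguments could be normalized and applied; the actual argument parsing in my `weighted_average_moe_lora.py` may differ.

```python
import sys
import torch

raw = [float(x) for x in sys.argv[1:]]   # e.g. "5 15 30 50"
weights = torch.tensor(raw) / sum(raw)   # -> 0.05, 0.15, 0.30, 0.50
# each expert's LoRA tensor is then scaled by its weight before summing:
# merged = sum(w_i * expert_i for w_i, expert_i in zip(weights, experts))
print(weights.tolist())
```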