Support LoRA from NewBie LoRA trainer
These LoRAs come from the NewBie LoRA trainer, and their keys carry the prefix base_model.model. This can also happen with other training scripts that export the LoRA directly from PEFT. The official NewBie/Lumina2/Z-Image models do not use this prefix, so I implemented the mapping.
An example of such a LoRA is https://civitai.com/models/2201563
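To illustrate the idea, the mapping boils down to rewriting the key prefix. Here is a minimal sketch (not the exact code in this PR; the function and variable names are illustrative):

```python
# Minimal sketch of the key remapping (illustrative names, not ComfyUI internals):
# strip the PEFT wrapper prefix "base_model.model." and use the "diffusion_model."
# naming that the LoRA loader already recognizes.
PEFT_PREFIX = "base_model.model."

def strip_peft_prefix(lora_sd: dict) -> dict:
    remapped = {}
    for key, value in lora_sd.items():
        if key.startswith(PEFT_PREFIX):
            key = "diffusion_model." + key[len(PEFT_PREFIX):]
        remapped[key] = value
    return remapped
```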
@woct0rdho Checked the layer syntax for the LoRA. I've never seen syntax like that, but judging from the fused qkv and the lora_A/lora_B suffixes, I think the easiest method, rather than having ComfyUI add even more naming to keep track of, is to ask the NewBie trainer to use more conventional layer names. In this case the prefix base_model.model would be replaced by diffusion_model. You can try this one with the model and see if it loads.
https://huggingface.co/silveroxides/HF-Upload-Test/blob/main/datou-comfy.safetensors
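For anyone who wants to fix an already-saved file the same way, a rename script along these lines should work (a sketch, assuming the LoRA is a flat safetensors file; the file paths are placeholders):

```python
# Offline key rename in the spirit of a rename_keys.py script (paths are placeholders).
from safetensors.torch import load_file, save_file

OLD_PREFIX = "base_model.model."
NEW_PREFIX = "diffusion_model."

state_dict = load_file("newbie_lora.safetensors")
fixed = {
    (NEW_PREFIX + k[len(OLD_PREFIX):]) if k.startswith(OLD_PREFIX) else k: v
    for k, v in state_dict.items()
}
save_file(fixed, "newbie_lora_comfy.safetensors")
```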
This happens when the LoRA is defined using PEFT and saved using save_pretrained, because PEFT puts the base model in base_model.model. It's not that uncommon, and we already have code to remove base_model.model. for SD3, PixArt, and HunyuanDiT in this file (which are not the most popular architectures and did not have day-0 support in AI Toolkit or sd-scripts, so they had their own custom trainers).
But of course, if we don't merge this in ComfyUI, we can change this in the NewBie LoRA trainer.
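For context, this is roughly how the prefix ends up in the file in the first place. A toy demonstration (not the NewBie trainer's actual code; the module and config values are made up):

```python
# Toy demonstration of why PEFT-saved adapters carry the "base_model.model." prefix:
# get_peft_model wraps the module in a PeftModel, and save_pretrained stores the
# adapter weights under that wrapper's module path.
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class TinyBlock(nn.Module):  # stand-in for a real DiT block, illustrative only
    def __init__(self):
        super().__init__()
        self.to_q = nn.Linear(8, 8)
        self.to_k = nn.Linear(8, 8)

model = get_peft_model(TinyBlock(), LoraConfig(r=4, target_modules=["to_q", "to_k"]))
model.save_pretrained("lora_out")
# Saved keys look like "base_model.model.to_q.lora_A.weight", hence the extra prefix.
print([k for k in model.state_dict() if "lora" in k])  # in-memory keys also carry the adapter name, e.g. ".default"
```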
Yeah, ofc. The layer name syntax across the various models is such chaos, and every new one that gets introduced (or old one reintroduced) tends to break something. I usually throw rename_keys.py scripts into Discord servers now and again so users can quickly fix it themselves after having already saved the model. AI-Toolkit's LoKR was such a case. Unrelated note: I am currently quantizing your Gemma3_4B for ComfyUI. Should I throw you a PR for it? It uses the comfy_quant layer config that is compatible with ComfyUI.
(Unrelated note) There's already an fp8 version of Gemma3 4B at https://civitai.com/models/2217313?modelVersionId=2507703. If I understand correctly, it's already using the comfy_quant layer config (although they mention calibration, and there are different choices of calibration).
Eeeh wut? What the hell. I have an entire Python module dedicated to just that: calibrating the dequant scale using learned rounding, with calibration samples for bias correction.
> This happens when the LoRA is defined using PEFT and saved using save_pretrained, because PEFT puts the base model in base_model.model. It's not that uncommon, and we already have code to remove base_model.model. for SD3, PixArt, and HunyuanDiT in this file (which are not the most popular architectures and did not have day-0 support in AI Toolkit or sd-scripts, so they had their own custom trainers).
You have to run the converter method before saving, like the Diffusers examples of training with PEFT show. This is a bug in the NewBieAI LoRA trainer.
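If the trainer keeps the PEFT wrapper, one way to do that conversion at save time could look like this. A sketch only, assuming the model is wrapped with get_peft_model; the helper name and target prefix are my own, not the trainer's API:

```python
# Hypothetical save helper for the trainer side (not the NewBie trainer's actual code):
# export the adapter weights and rewrite the PeftModel wrapper prefix before saving.
from peft.utils import get_peft_model_state_dict
from safetensors.torch import save_file

def save_comfy_compatible_lora(peft_model, path: str) -> None:
    old, new = "base_model.model.", "diffusion_model."
    sd = get_peft_model_state_dict(peft_model)
    sd = {(new + k[len(old):]) if k.startswith(old) else k: v for k, v in sd.items()}
    save_file(sd, path)
```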
Since lora.py already acts as a compatibility layer for various models, and this issue stems from standard PEFT saving, adding a fix here seems like the most pragmatic approach. It extends the same convenience to NewBie users and saves them from unnecessary troubleshooting.