LGM
Is there currently a way to run inference on this model with a LoRA?
I didn't know whether this is supported yet, but even if it isn't, I was curious what would be needed to add that level of modularity, so users could try the different LoRAs available for this model during inference.
@x-CK-x Hi, since our model is essentially a U-Net with attention, like stable diffusion, it is possible to add LoRA. But it would take experiments to decide which layers to apply it to and what kind of training is needed. Also, since the model itself is not very big, you could first try directly finetuning the whole model.