LoRA finetuning or freezing weights for training PyTorch models?
Wonderful work!
I noticed that assigning paligemma_variant="gemma_2b_lora" and action_expert_variant="gemma_300m_lora" in the TrainConfig.model() config, together with TrainConfig.freeze_filter(), only takes effect in JAX training.
So is there a convenient way to add LoRA finetuning or to freeze certain weights (e.g., freeze SigLIP) in PyTorch training?
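For illustration, here is a minimal sketch of the freezing part in plain PyTorch, assuming the model exposes standard named parameters (the name patterns like "siglip" are guesses at the PyTorch model's naming, not openpi's actual layout):

```python
import re
import torch
import torch.nn as nn

def freeze_by_pattern(model: nn.Module, patterns: list[str]) -> None:
    """Freeze every parameter whose name matches any of the given regex patterns."""
    for name, param in model.named_parameters():
        if any(re.search(p, name) for p in patterns):
            param.requires_grad = False

# Hypothetical usage: freeze the SigLIP vision encoder by name pattern.
# freeze_by_pattern(model, [r"siglip", r"vision_tower"])

# Then hand only the still-trainable parameters to the optimizer:
# optimizer = torch.optim.AdamW(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-4
# )
```

And for the LoRA part, if the PyTorch port is built from standard nn.Linear layers, something like Hugging Face's peft library might work as a stopgap (the target_modules names here are assumptions, not openpi's actual module names):

```python
from peft import LoraConfig, get_peft_model

# Hypothetical LoRA setup; r/alpha are common defaults, not openpi's JAX config.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
)
model = get_peft_model(model, lora_config)  # wraps the model and freezes base weights
model.print_trainable_parameters()
```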
Any advice would help. Thank you!