
LoRA finetuning or freezing weights for training PyTorch models?

PuzhenYuan opened this issue 6 days ago · 0 comments

Wonderful work!

I noticed that setting paligemma_variant="gemma_2b_lora" and action_expert_variant="gemma_300m_lora" in TrainConfig.model() and TrainConfig.freeze_filter() only takes effect in JAX training.

So, is there a convenient way to add LoRA finetuning or to freeze certain weights (e.g. freeze SigLIP) in PyTorch training?
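For reference, here is a minimal sketch of the kind of freezing I mean. The module names (`vision_tower`, `action_expert`) are hypothetical stand-ins, not the actual attribute names in openpi's PyTorch model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the PyTorch pi0 model; the real attribute
# names (e.g. the SigLIP vision tower) are assumptions for illustration.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision_tower = nn.Linear(8, 8)   # stands in for SigLIP
        self.action_expert = nn.Linear(8, 8)

model = TinyModel()

# Freeze the vision tower by disabling gradients on its parameters.
for p in model.vision_tower.parameters():
    p.requires_grad_(False)

# Only hand the still-trainable parameters to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(f"frozen params: {frozen}")  # prints "frozen params: 72"
```

For the LoRA part, wrapping the model with Hugging Face's peft library (LoraConfig + get_peft_model) is a common approach, but I am not sure whether that plugs cleanly into openpi's PyTorch training loop, which is exactly what I am asking.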

Any advice will help. Thank you!

PuzhenYuan, Nov 28 '25 06:11