pi0 multi-GPU training
If I have multiple 4090s, how should I modify the code to train pi0?
With only one 4090 it just errors out.
same question + 1
same question + 1
sad
Would also like to know. Another question: how do you freeze SigLIP and Gemma and only update the parameters for the Gemma expert?
+1, I have the same question.
+1, not everyone has an H100.
They seem to freeze those already and only train the action expert, but it still takes up 29 GB on the GPU.
sad
Try setting `train_expert_only = True`; this will let you train the model on a single 24 GB GPU.
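In case you want to do the freezing yourself rather than rely on a config flag, here is a minimal PyTorch sketch of freezing everything except an "expert" submodule via `requires_grad`. The module names (`vision_tower`, `language_model`, `action_expert`) are illustrative stand-ins, not lerobot's actual attribute names:

```python
import torch
import torch.nn as nn

class ToyPolicy(nn.Module):
    """Toy stand-in for a VLA-style policy. The real pi0 model has a
    SigLIP vision tower, a Gemma language model, and a smaller action
    expert; these submodule names are hypothetical, not lerobot's."""
    def __init__(self):
        super().__init__()
        self.vision_tower = nn.Linear(8, 8)    # stands in for SigLIP
        self.language_model = nn.Linear(8, 8)  # stands in for Gemma
        self.action_expert = nn.Linear(8, 4)   # stands in for the Gemma expert

def freeze_all_but_expert(model: nn.Module, expert_name: str = "action_expert") -> None:
    """Disable gradients everywhere except the expert submodule."""
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(expert_name)

model = ToyPolicy()
freeze_all_but_expert(model)

# Pass only the still-trainable parameters to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
print(sum(p.numel() for p in trainable))  # only the expert's parameters count
```

Note that freezing parameters reduces optimizer state and gradient memory, but the frozen SigLIP/Gemma weights and their activations still occupy GPU memory during the forward pass, which is consistent with the ~29 GB observation above.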
Has anyone here managed to train with multiple GPUs? 😟
The Hugging Face Trainer makes distributed training easy to set up; I just wonder why they don't use it.
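While waiting for official multi-GPU support, a generic PyTorch `DistributedDataParallel` skeleton is one way to spread training across several 4090s. This is a sketch launched with `torchrun`, not the Hugging Face Trainer and not lerobot's actual training script; the model here is a placeholder:

```python
# Launch with: torchrun --nproc_per_node=2 train_ddp.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each process.
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(8, 4).cuda(local_rank)  # stand-in for the pi0 policy
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 8, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces gradients across the GPUs here
        optimizer.step()

    dist.destroy_process_group()

# Only run when launched under torchrun (which sets RANK).
if __name__ == "__main__" and "RANK" in os.environ:
    main()
```

In a real run you would also wrap the dataset in a `DistributedSampler` so each rank sees a distinct shard. Note DDP replicates the full model per GPU, so it only helps with throughput, not with fitting pi0 into 24 GB; for that, parameter freezing as discussed above is the relevant lever.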
Hi, we updated our pi0 implementation and released pi05 support in this PR: https://github.com/huggingface/lerobot/pull/1910 🚀. If you still have this issue please reopen this thread, but I am closing it for now. We will also introduce multi-GPU training with accelerate soon.
Did anyone succeed?