Jianguo Zhang
@muellerzr @sgugger Thanks for the great Accelerate library, it is super reliable on 8 cores! Could you let me know when Accelerate will support training on TPU VMs with more than 8...
@sgugger Thanks very much for your quick update :). We have several colleagues interested in deploying Accelerate on more cores. Looking forward to the future release :)
@Ontopic @sumanthd17 Hi there, please react to the message above with a 👍🏻 if you want to train models on more than 8 TPU cores in the future.
> @sherlock42 You can take a look at https://huggingface.co/docs/accelerate/index and especially the examples in https://github.com/huggingface/transformers/tree/main/examples/pytorch. It is pretty simple to run Accelerate on TPUs.
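For anyone landing here, a minimal sketch of what single-TPU (8-core) training looks like with Accelerate's `notebook_launcher`, following the docs linked above; the tiny model, dataloader, and the `train_fn` name are illustrative placeholders, not code from this thread:

```python
import torch
from accelerate import Accelerator, notebook_launcher

def train_fn():
    accelerator = Accelerator()
    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    dataloader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(
            torch.randn(64, 10), torch.randint(0, 2, (64,))
        ),
        batch_size=8,
    )
    # prepare() shards the dataloader and places everything on the TPU cores
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
    for x, y in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        accelerator.backward(loss)  # use this instead of loss.backward()
        optimizer.step()

# Spawn one process per TPU core (8 on a v3-8 TPU VM). From a script you
# would instead run `accelerate config` once and then `accelerate launch`.
notebook_launcher(train_fn, num_processes=8)
```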
@muellerzr There are several 👍 reactions showing interest in training with Accelerate on TPU VMs with more than 8 cores, and I think many people may have the same request...
Hi, I think the ground truth should be updated here, i.e., the value should be either `after 18:30` or `19:00`.
@budzianowski Hi, which dataset version is your question targeting?
Hi Zachary, thanks for the great update! We are currently trying the new launcher on a v3-32 and will share some feedback soon :)