Lasse Espeholt
Within experimental_run_v2 this shouldn't trigger a synchronization; only outside of it. Regarding "we can't split the devices into inference devices and training devices as with TPU": what do you mean with...
Sorry, it's still not clear to me what you mean by "not working". That line should work just fine with multiple GPUs.
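A minimal sketch of the point about synchronization, assuming a TensorFlow 2.x distribution strategy: the step function passed to experimental_run_v2 (renamed strategy.run in later releases) executes per replica without cross-device synchronization, and only an explicit cross-replica reduction outside of it synchronizes. The guard around the import and the OneDeviceStrategy choice are illustration-only assumptions, not SEED RL code.

```python
# Hedged sketch: per-replica work inside experimental_run_v2 does not
# synchronize; only the cross-replica reduction outside of it does.
def per_replica_sum():
    try:
        import tensorflow as tf
    except ImportError:
        # TensorFlow unavailable; the sketch degrades gracefully.
        return None

    # A single-device strategy keeps the example runnable anywhere;
    # with MirroredStrategy/TPUStrategy the same pattern applies.
    strategy = tf.distribute.OneDeviceStrategy("/cpu:0")

    def step():
        # Runs independently on each replica; no cross-device sync here.
        return tf.constant(1.0)

    # experimental_run_v2 was renamed to `run` in later TF versions.
    run_fn = getattr(strategy, "experimental_run_v2", strategy.run)
    per_replica = run_fn(step)

    # Only this reduction synchronizes across replicas.
    return float(strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica, axis=None))
```

With a single replica the reduction simply returns 1.0; with multiple GPUs it would sum the per-replica values.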
Hi, it's true we haven't included Cloud TPU examples in the current code base, although most of the code is there (multi-GPU support as well, if useful). The reason...
Hi, there is no private TPU operation. The code for the grpc operations is there and they don't need TPUs to run. However, the VM with the TPU attached doesn't...
Running on TPUs should now be significantly easier with the introduction of Cloud TPU VMs described here: https://cloud.google.com/tpu/docs/users-guide-tpu-vm I would consider making an example if there is enough interest.
You can recompile the grpc binary with this file: https://github.com/google-research/seed_rl/blob/master/grpc/build.sh It may work out of the box with the nightly custom-op Docker image, but there could be small things that need tweaking.
I can ping the Cloud TPU VM team, but this error appears to be different. The TF gRPC library appears to be fine. Can you try to list the available...
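Listing the devices TensorFlow can see is a quick way to distinguish a missing-accelerator problem from a library problem. A hedged sketch, assuming TensorFlow 2.x is installed (the function name and the ImportError fallback are illustration-only):

```python
# Hedged sketch: list the TPU/GPU devices visible to TensorFlow.
# Returns an empty list when no accelerators are attached, and falls
# back gracefully when TensorFlow itself is not installed.
def list_accelerators():
    try:
        import tensorflow as tf
    except ImportError:
        return []
    # Logical devices as TensorFlow sees them; on a Cloud TPU VM the
    # TPU cores should appear here, on a GPU machine the GPUs should.
    return (tf.config.list_logical_devices("TPU")
            + tf.config.list_logical_devices("GPU"))

if __name__ == "__main__":
    print(list_accelerators())
```

An empty list on a Cloud TPU VM would point at the TPU runtime setup rather than at the gRPC ops.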
I'm back from vacation and pinged the 1VM team. I hope to circle back soon, thanks!