Add TPU support
I feel like this project should have a way to use a TPU instead of a GPU.
One usage example would be on Google Colab.
Feel free to open a pull request!
Running TensorFlow under a TPU is easy. One problem here is that the Python script checks for CUDA, which is not used in a TPU environment.
https://github.com/pytorch/xla
How much of a speedup do you think there would be, @NickAcPT? I think it would be complicated to reshape the entire codebase, and a big if-clause would be a possible but not very good solution. What are your ideas so far?
Well, I don't know exactly. I'm new to this, but from what I've read, it should be faster than a GPU since it's specialized hardware for ML. If this can't be added, no problem; a GPU still works fine.
I am pretty new to this as well. I think you could check at the beginning whether a GPU is available and set a variable accordingly. The problem is that most functions in the code end with .cuda(), so you would have to add an if-switch at every place where this happens, which I think could impact performance. And the sheer extent of this change is probably why he wants you to create a pull request.
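The check-once-at-startup idea above could be sketched roughly as follows (a minimal sketch, not this project's actual code; the helper name `pick_device_name` is hypothetical, and the `"xla"` device string comes from the pytorch/xla project):

```python
# Minimal sketch: decide on a device string once at startup instead of
# hard-coding .cuda() throughout the code. Helper name is hypothetical.
def pick_device_name(cuda_available: bool, xla_available: bool) -> str:
    """Prefer CUDA, then TPU (via pytorch/xla), then fall back to CPU."""
    if cuda_available:
        return "cuda"
    if xla_available:
        return "xla"  # device string used by pytorch/xla
    return "cpu"

# With PyTorch this would be used roughly like:
#   import torch
#   device = torch.device(pick_device_name(torch.cuda.is_available(), False))
#   model = model.to(device)  # instead of model.cuda()
```

Replacing every `.cuda()` call with a single `.to(device)` keeps the branching in one place, so no if-switch is needed at each call site.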
The best route would be to integrate pytorch-lightning (although I am uncertain about its support for two optimizers), as it now has TPU support: https://pytorch-lightning.readthedocs.io/en/latest/tpu.html
Porting this to PyTorch Lightning would be great. It has support for multiple optimizers (reference), and training on TPU is very simple (reference).
@lucidrains In the readme, there is an offhand comment that the smallest AWS p2 instance is "slower than Colab". How did you find this out?
EDIT: NVM, just realized you have to select "GPU" in settings!