paraphrase-id-tensorflow
Add MultiGPU (Data Parallelism)
Training time looks pretty long on AWS instances with K80s. Adding multi-GPU data parallelism would be a good way to mitigate this (as done in https://www.tensorflow.org/tutorials/using_gpu#using_multiple_gpus).
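For reference, a minimal sketch of the multi-tower data-parallel pattern from the linked tutorial: each GPU computes gradients on its shard of the batch, and the averaged gradients are applied once per step. This is TensorFlow 1.x style code; `model_loss`, `NUM_GPUS`, and the tensor shapes are placeholder assumptions for illustration, not part of this repo's model.

```python
# Sketch of multi-GPU data parallelism (TF 1.x multi-tower pattern).
# Assumes the batch size is divisible by NUM_GPUS.
import tensorflow as tf

NUM_GPUS = 2  # hypothetical; set to the number of available K80s

def model_loss(inputs, labels):
    """Placeholder model: one dense layer with softmax cross-entropy."""
    logits = tf.layers.dense(inputs, units=10)
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits))

def average_gradients(tower_grads):
    """Average gradients across towers, matching variables by position."""
    averaged = []
    for grads_and_vars in zip(*tower_grads):
        grads = [g for g, _ in grads_and_vars]
        averaged.append((tf.reduce_mean(tf.stack(grads), axis=0),
                         grads_and_vars[0][1]))
    return averaged

inputs = tf.placeholder(tf.float32, [None, 100])
labels = tf.placeholder(tf.int64, [None])
optimizer = tf.train.AdamOptimizer()

# Split each batch across GPUs; each tower computes gradients on its shard.
input_shards = tf.split(inputs, NUM_GPUS)
label_shards = tf.split(labels, NUM_GPUS)
tower_grads = []
with tf.variable_scope(tf.get_variable_scope()):
    for i in range(NUM_GPUS):
        with tf.device('/gpu:%d' % i):
            loss = model_loss(input_shards[i], label_shards[i])
            tower_grads.append(optimizer.compute_gradients(loss))
            tf.get_variable_scope().reuse_variables()  # share weights

# Apply the averaged gradients once per step.
train_op = optimizer.apply_gradients(average_gradients(tower_grads))
```

In practice the session would also need `allow_soft_placement=True` in its config so shared variables can be accessed across devices.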
Does the current version support multiple GPUs? I find the training time is pretty long on my training dataset, which has almost 600,000 sentence pairs.
Unfortunately not; I haven't been very good about maintaining this code since I switched to PyTorch shortly after this. Sorry!