StarGAN_v2-Tensorflow

Distributed training with TensorFlow

Open phongnhhn92 opened this issue 5 years ago • 0 comments

Hi, thanks for the code. I am wondering what would happen if I used distributed training with TensorFlow in this project, since I have 2 GPUs. I see that during the training phase, the code splits the image batch across several GPUs and then feeds each split inside a for loop that iterates over the GPUs (a rough sketch of that pattern is below).
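For context, this is a minimal sketch of the manual multi-GPU "tower" pattern described above, written in TF 1.x graph style; the names (build step, `num_gpus`, the dummy loss) are illustrative placeholders and not the repo's actual code.

```python
# Sketch of the manual per-GPU loop (TF 1.x style); illustrative only.
import tensorflow as tf  # assumes TF 1.x APIs (tf.compat.v1 under TF 2.x)

num_gpus = 2
images = tf.placeholder(tf.float32, [None, 256, 256, 3])
image_splits = tf.split(images, num_gpus, axis=0)  # split the batch across GPUs

tower_losses = []
for i in range(num_gpus):
    # Each GPU builds its own copy of the graph on its shard of the batch,
    # reusing the same variables.
    with tf.device('/gpu:%d' % i), tf.variable_scope('model', reuse=tf.AUTO_REUSE):
        features = tf.layers.conv2d(image_splits[i], 64, 3, padding='same')
        tower_losses.append(tf.reduce_mean(tf.square(features)))  # dummy loss

# Average the per-tower losses and let the optimizer place gradient ops
# on the same devices as their forward ops.
total_loss = tf.add_n(tower_losses) / num_gpus
train_op = tf.train.AdamOptimizer(1e-4).minimize(
    total_loss, colocate_gradients_with_ops=True)
```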

I am wondering whether this is the optimal way to do it. I am not very familiar with TensorFlow, but after looking around a bit I found the Distributed training with TensorFlow guide (tf.distribute.Strategy).

So this is not a bug report, but I think the training loop could be improved by applying that approach; a minimal sketch is below.
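Here is a minimal sketch of the tf.distribute.MirroredStrategy pattern from that guide, assuming TF 2.2+ and eager execution; the tiny Keras model, batch size, and random data are placeholders, not the StarGAN v2 networks, so this only shows the structure of a distributed custom training loop.

```python
# Minimal tf.distribute.MirroredStrategy custom training loop (TF 2.2+).
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs by default
print("Number of replicas:", strategy.num_replicas_in_sync)

GLOBAL_BATCH_SIZE = 8 * strategy.num_replicas_in_sync

with strategy.scope():
    # Variables created inside the scope are mirrored across the GPUs.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu",
                               input_shape=(256, 256, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])
    optimizer = tf.keras.optimizers.Adam(1e-4)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True,
        reduction=tf.keras.losses.Reduction.NONE)  # average manually below

# Dummy dataset; the strategy shards each global batch across the replicas.
images = tf.random.normal([64, 256, 256, 3])
labels = tf.random.uniform([64], maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def train_step(batch):
    images, labels = batch
    with tf.GradientTape() as tape:
        logits = model(images, training=True)
        per_example_loss = loss_fn(labels, logits)
        # Scale by the global batch size so gradients sum correctly across replicas.
        loss = tf.nn.compute_average_loss(per_example_loss,
                                          global_batch_size=GLOBAL_BATCH_SIZE)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(batch):
    per_replica_losses = strategy.run(train_step, args=(batch,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for batch in dist_dataset:
    print("loss:", float(distributed_train_step(batch)))
```

With this pattern the batch splitting, per-GPU execution, and gradient all-reduce are handled by the strategy, so there is no explicit for loop over GPUs in the training code.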

phongnhhn92 · Jan 02 '20 16:01