stylegan-reproduced

Is distributed training possible?

zhangqijun opened this issue 5 years ago • 4 comments

I have 3 machines, each with one 2080 Ti. Do you have any suggestions on a training method and a dataset loader (from tfrecords)?

zhangqijun avatar Sep 16 '19 10:09 zhangqijun

I'm not an expert in distributed training systems, but I've tried (tested) some distributed training before, so...

You could try other strategies in tf.distribute.experimental besides tf.distribute.MirroredStrategy, but I recommend putting all your GPUs in one machine and using tf.distribute.MirroredStrategy. Other strategies like ParameterServerStrategy are a bit harder to set up than MirroredStrategy, and how the machines are organized (e.g. network settings) affects performance.

If you want to try anyway, follow the guide in the old contrib README; see the Multi-worker Training section.
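For the single-machine route, here is a minimal sketch of how the strategy is typically wired into an Estimator (TF 1.x contrib API, the same style train.py uses; model_fn is a placeholder, not a function from this repo):

# minimal sketch, single machine with multiple GPUs (TF 1.x)
import tensorflow as tf

strategy = tf.contrib.distribute.MirroredStrategy()             # replicates the model across all local GPUs
run_config = tf.estimator.RunConfig(train_distribute=strategy)  # hand the strategy to the Estimator
# model_fn is a placeholder for the repo's actual model function
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)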

moono avatar Sep 17 '19 00:09 moono

First of all, thank you for your help. I changed train.py line 74

from

distribution = tf.contrib.distribute.MirroredStrategy()

to

distribution = tf.contrib.distribute.CollectiveAllReduceStrategy(num_gpus_per_worker=1)

and added

import json
import os

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["192.168.108.15:2222", "192.168.108.11:2222"],
    },
    "task": {"type": "worker", "index": 0},
})

at the beginning of the code. But it seems each machine starts its server on "localhost":2222, so they cannot communicate with each other. I'm trying to fix this problem; I also tried standalone client mode with tf.contrib.distribute.run_standard_tensorflow_server().join(), but it has the same problem. Any thoughts about what I'm missing?

[screenshot attached]
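For reference, a hedged sketch of how TF_CONFIG is usually set per machine (generic TensorFlow multi-worker convention, not specific to this repo): the cluster spec is identical on every machine, but each machine must use its own task index.

# sketch: two-worker TF_CONFIG, one process per machine
import json
import os

cluster = {"worker": ["192.168.108.15:2222", "192.168.108.11:2222"]}

# on 192.168.108.15
os.environ["TF_CONFIG"] = json.dumps({"cluster": cluster,
                                      "task": {"type": "worker", "index": 0}})
# on 192.168.108.11, the same line would use "index": 1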

zhangqijun avatar Sep 24 '19 10:09 zhangqijun

Stupid me, I found that I was using a global system proxy.

zhangqijun avatar Sep 24 '19 13:09 zhangqijun

If I use my own dataset, should I put face/ in datasets/, i.e. datasets/face/**.jpg?

ucasiggcas avatar Dec 08 '19 14:12 ucasiggcas
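Since the loader discussed above reads tfrecords, placing raw JPEGs under datasets/face/ is likely not enough on its own; they usually need to be converted first. Below is a minimal sketch of such a conversion. The output filename and the 'shape'/'data' feature keys follow the original StyleGAN tfrecord layout and are assumptions here; check this repo's dataset loader for the exact keys and per-resolution files it expects.

# sketch: write every JPEG under datasets/face/ into one tfrecord file
import glob

import numpy as np
import tensorflow as tf
from PIL import Image

with tf.io.TFRecordWriter('datasets/face/face.tfrecords') as writer:  # filename is a placeholder
    for path in sorted(glob.glob('datasets/face/*.jpg')):
        img = np.asarray(Image.open(path).convert('RGB'))  # HWC, uint8
        img = img.transpose(2, 0, 1)                        # CHW, the layout StyleGAN's reader uses
        example = tf.train.Example(features=tf.train.Features(feature={
            'shape': tf.train.Feature(int64_list=tf.train.Int64List(value=img.shape)),
            'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img.tobytes()])),
        }))
        writer.write(example.SerializeToString())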