TwinGAN

Inference with a model trained on multiple GPUs

veya2ztn opened this issue 6 years ago • 1 comment

When training on multiple GPUs, the saved model (in the .meta file) is split into two clones, so all the tensor names are changed by default, e.g. sources_ph --> clone_0/sources_ph and custom_generated_t_style_source --> clone_0/custom_generated_t_style_source. So if anyone wants to evaluate or run inference on their own single-GPU machine, please be careful when the pre-trained model was trained on multiple GPUs. I recommend using this inference command:

```
python inference/image_translation_infer.py \
    --model_path="/PATH/TO/CHECKPOINT" \
    --image_hw=128 \
    --input_tensor_name="clone_0/sources_ph" \
    --output_tensor_name="clone_0/custom_generated_t_style_source" \
    --input_image_path="/PATH/TO/INPUT" \
    --output_image_path="/PATH/TO/OUTPUT"
```
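
To double-check which prefix your checkpoint actually uses before running the command above, you can list the operation names in the saved graph. This is a minimal sketch, assuming TensorFlow 1.x (as TwinGAN uses); the checkpoint path is a placeholder:

```python
# Minimal sketch (TensorFlow 1.x assumed) for inspecting tensor names in a
# saved .meta graph, to verify the clone_0/ prefix before picking
# --input_tensor_name / --output_tensor_name.
# "/PATH/TO/CHECKPOINT.meta" is a placeholder, not a path from this repo.
import tensorflow as tf

tf.reset_default_graph()
tf.train.import_meta_graph("/PATH/TO/CHECKPOINT.meta")
graph = tf.get_default_graph()

# Print operations whose names mention the tensors we care about.
for op in graph.get_operations():
    if "sources_ph" in op.name or "custom_generated_t_style_source" in op.name:
        print(op.name)
```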

And by the way, I am wondering why the inference speed is so slow. Loading the weights takes 5-10 seconds on my 1080 Ti.

veya2ztn · Nov 26 '18 09:11

Thank you for the pointers on the changes needed for multi-GPU inference! I think the weight-loading time sounds acceptable given the model size and HDD read speed. If you want faster inference, you can change the code to do batching.
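
To illustrate the batching suggestion, here is a minimal sketch of running several images through the restored graph in a single sess.run call, so the one-time weight-loading cost is amortized across the batch. It assumes TensorFlow 1.x, that the input placeholder accepts a leading batch dimension with shape [None, 128, 128, 3], and uses the multi-GPU tensor names noted above; paths are placeholders:

```python
# Minimal sketch of batched inference (TensorFlow 1.x assumed).
# Tensor names follow the clone_0/ naming from the discussion above; the
# input shape and checkpoint paths are assumptions, not repo facts.
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph("/PATH/TO/CHECKPOINT.meta")
    saver.restore(sess, "/PATH/TO/CHECKPOINT")
    graph = tf.get_default_graph()
    inp = graph.get_tensor_by_name("clone_0/sources_ph:0")
    out = graph.get_tensor_by_name("clone_0/custom_generated_t_style_source:0")

    # One sess.run over a whole batch amortizes the 5-10 s weight-loading
    # cost and the per-call overhead across all images at once.
    batch = np.random.rand(8, 128, 128, 3).astype(np.float32)  # stand-in images
    results = sess.run(out, feed_dict={inp: batch})
    print(results.shape)
```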

jerryli27 · Nov 26 '18 13:11