Shift-Net
About multi-GPU training
How can I train on multiple GPUs? And how does the code load the dataset? Should I put the damaged images and the ground-truth images into separate folders, giving each corresponding pair the same file name? Thank you for your reply.
For the first question, #6 may help you.
For the 2nd question, all ground-truth images should be placed in one folder; that's all. Set mask_type=random during training (see the sketch below). When testing, you need to figure out the mask corresponding to each damaged image, because for general inpainting it is impossible to recover the masked region directly from the damaged image. Blind inpainting makes sense only for specific scenarios: scratches in old photos, fence removal, etc.
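Just to illustrate what mask_type=random implies: during training the damaged input is synthesized on the fly from an intact image and a randomly generated mask. Below is a minimal sketch of that idea; the function name generate_random_mask and the single-square-hole shape are made up for illustration and are not the repo's actual mask generator.

```python
import numpy as np

def generate_random_mask(height, width, hole_size=64, rng=None):
    """Return a float mask with one randomly placed square hole.

    1.0 marks the missing (to-be-inpainted) region, 0.0 marks known pixels.
    This is only a toy stand-in for what mask_type=random produces internally.
    """
    rng = rng or np.random.default_rng()
    mask = np.zeros((height, width), dtype=np.float32)
    top = int(rng.integers(0, height - hole_size))
    left = int(rng.integers(0, width - hole_size))
    mask[top:top + hole_size, left:left + hole_size] = 1.0
    return mask

# The damaged input is then derived on the fly from the ground-truth image:
# damaged = ground_truth * (1.0 - mask[..., None])  # broadcast over RGB channels
```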
Finally, I also recommend trying our PyTorch version; it already supports multi-GPU training.
I am training the PyTorch model these days and hope to have a good model in the coming months.
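For reference, multi-GPU training in PyTorch is commonly enabled by wrapping the model in nn.DataParallel. The snippet below is generic PyTorch usage, not the actual Shift-Net training code; the Generator class is a placeholder.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Placeholder standing in for the real Shift-Net generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Generator().to(device)

# Split each batch across all visible GPUs when more than one is available.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

output = model(torch.randn(8, 3, 256, 256, device=device))
```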
Thanks for your reply. Regarding your answer to the 2nd question: in other words, training the model needs only the intact images, not the damaged ones. Is that right? Also, I want to restore some images containing random lines, similar to fences, but it is difficult to get the mask of those lines. What should I do if I want to use Shift-Net to solve this problem? Thanks for your reply.
Since the line mask is quite thin, GAN training is not strictly necessary for it. However, because the shift operation moves pixels from the known region into the missing region, shift cannot really be applied when no mask is available. One option is to use an additional network to estimate the mask first and then perform the shift, as in the sketch below. What's your opinion?
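As a rough illustration of the "additional network to estimate the mask" idea, a tiny encoder-decoder could predict a per-pixel corruption mask from the damaged image. This is only a sketch; MaskEstimator and its architecture are invented for illustration and are not part of Shift-Net.

```python
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    """Tiny encoder-decoder that predicts a soft corruption mask.

    Input: damaged RGB image. Output: values in [0, 1], where values near 1
    indicate pixels believed to be corrupted (e.g. thin fence-like lines).
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, damaged):
        return self.net(damaged)

# The predicted mask could be thresholded and passed to the inpainting model:
# soft_mask = MaskEstimator()(damaged)          # damaged: (N, 3, H, W)
# mask = (soft_mask > 0.5).float()              # binary mask for the shift step
```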