chainer-partial_convolution_image_inpainting
Differences from the original implementation
Hi Seitaro, thanks for your interest and effort in reproducing our work. I noticed there are some differences from our implementation. I just created a Q&A section here: http://masc.cs.gmu.edu/wiki/partialconv
Thanks a lot. Guilin Liu
Hi @liuguilin1225, thanks for kindly pointing this out! I will check it. Seitaro
I added a statement to the README, "Difference from original paper."
Following http://masc.cs.gmu.edu/wiki/partialconv, I fixed my implementation: I added the bias calculation to the partial convolution layer (C(0) in the FAQ).
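For reference, here is a minimal sketch, not the repository's actual code, of a partial convolution forward pass with the explicit bias term described in the FAQ. It assumes a single-channel 0/1 mask and computes the re-normalization ratio sum(1)/sum(M) from that mask:

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L


class PartialConv2D(chainer.Chain):
    """Sketch of a partial convolution with an explicit bias (C(0) in the FAQ)."""

    def __init__(self, in_ch, out_ch, ksize, stride=1, pad=0):
        super().__init__()
        with self.init_scope():
            # Feature convolution WITHOUT its own bias; the bias is added
            # manually after the mask re-normalization.
            self.conv = L.Convolution2D(in_ch, out_ch, ksize, stride, pad, nobias=True)
            self.b = chainer.Parameter(np.zeros(out_ch, dtype=np.float32))
        # Fixed all-ones kernel that counts valid pixels under each window.
        self.mask_kernel = np.ones((1, 1, ksize, ksize), dtype=np.float32)
        self.stride, self.pad = stride, pad

    def __call__(self, x, mask):
        # mask: (N, 1, H, W), 1 for valid pixels, 0 for holes.
        sum_m = F.convolution_2d(mask, self.mask_kernel,
                                 stride=self.stride, pad=self.pad).data
        valid = (sum_m > 0).astype(np.float32)
        # Re-normalize by sum(1)/sum(M), guarding against windows with no valid pixels.
        ratio = (self.mask_kernel.size / (sum_m + 1e-8)) * valid
        feat = self.conv(x * F.broadcast_to(mask, x.shape))
        out = feat * F.broadcast_to(ratio.astype(np.float32), feat.shape)
        out = F.bias(out, self.b)                         # explicit bias term
        out = out * F.broadcast_to(valid, out.shape)      # holes stay zero
        return out, valid                                 # valid is the updated mask
```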
@liuguilin1225 are you the author of this paper? I saw the name is the same as yours ~~~///(^v^)\~~~
@sunkyya Yes. I am the author, Guilin Liu.
@liuguilin1225 I can't reach http://masc.cs.gmu.edu/wiki/partialconv. I have tried many times; can you reach this site?
@jeejeeli The server was down for several days due to Internet issues. It is back now. http://masc.cs.gmu.edu/wiki/partialconv
@liuguilin1225 thanks ~~~
Hi @liuguilin1225, do you plan to open-source the code?
@xuanzhangyang currently our company has some other plans for it. We will release the code later.
@liuguilin1225, I have some problems with the fine-tuning described in your pconv paper. In the paper, you said to freeze BN in the encoder layers and keep BN in the decoder layers. I did what you said, but when I do this, my loss becomes very high. Did you ever meet this problem? Should I just wait until the loss settles down? I used batch normalization from tensorflow.contrib.slim and set "trainable" and "is_training" to false for fine-tuning.
@sunkyya I have little experience with TensorFlow. For my experiments, the difference between enabling BN and freezing BN in the encoder is not too big. The reason I did that is more about ensuring the central idea of BN, computing the mean and variance, depends only on the valid pixels. However, ideally, even if you don't do such fine-tuning, the results would still be reasonable, so sometimes this fine-tuning can be eliminated. Also note that I enable the bias for each partial conv layer; generally, many people disable the bias in conv when their network uses Conv-BN-ReLU groups, as the BN already has a bias. This also means that if your partial conv doesn't have a bias, you should NOT disable the BN; otherwise, the network is just a simple multiplication.
Cheers,
Guilin Liu, NVIDIA
https://liuguilin1225.github.io/
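As an editorial sketch (not code from this thread), freezing the encoder-side BN during fine-tuning might look like the following in Chainer; `encoder_convs` and `encoder_bns` are hypothetical attribute names for the model's encoder layers:

```python
import chainer
import chainer.functions as F


def freeze_encoder_bn(model):
    # Stop gradient updates for gamma/beta of every encoder-side BN link.
    for bn in model.encoder_bns:          # hypothetical attribute
        bn.disable_update()


def encoder_forward(model, x, mask):
    h, m = x, mask
    for pconv, bn in zip(model.encoder_convs, model.encoder_bns):  # hypothetical attributes
        h, m = pconv(h, m)
        # Use the accumulated running mean/variance instead of batch statistics,
        # so the normalization is not affected by hole regions during fine-tuning.
        with chainer.using_config('train', False):
            h = bn(h)
        h = F.relu(h)
    return h, m
```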
@liuguilin1225 thank you!!! The bias may be my problem; I will try adding the bias and fine-tuning again. Thank you!!!
@sunkyya ideally if you use PartialConv-BN-ReLU and don't do fine-tuning, it should also work.
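To make that concrete, here is a minimal sketch of one PartialConv-BN-ReLU encoder block, assuming the hypothetical PartialConv2D class from the earlier sketch is in scope. With BN present the block keeps a learnable scale and shift, which is why dropping both the conv bias and BN at the same time is the problematic combination mentioned above:

```python
import chainer
import chainer.functions as F
import chainer.links as L


class PConvBNReLU(chainer.Chain):
    """One PartialConv-BN-ReLU encoder block (sketch).

    Reuses the hypothetical PartialConv2D defined in the earlier sketch."""

    def __init__(self, in_ch, out_ch, ksize, stride=2, pad=1):
        super().__init__()
        with self.init_scope():
            self.pconv = PartialConv2D(in_ch, out_ch, ksize, stride, pad)
            self.bn = L.BatchNormalization(out_ch)

    def __call__(self, x, mask):
        h, mask = self.pconv(x, mask)
        return F.relu(self.bn(h)), mask
```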
@liuguilin1225 If I have a real broken image, how can I generate the input mask for it by extracting the broken pixels? Thanks for your help, and I look forward to your reply.
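The thread does not answer this, but one common heuristic, shown here only as a sketch, is to mark pixels matching a known "damage" color as holes; `hole_color` and `tol` are assumptions, and the approach only works when the damaged pixels have a distinctive flat color:

```python
import numpy as np
from PIL import Image


def mask_from_damaged_image(path, hole_color=(255, 255, 255), tol=10):
    """Mark pixels close to `hole_color` as holes (mask=0), the rest as valid (mask=1)."""
    img = np.asarray(Image.open(path).convert('RGB')).astype(np.int32)
    dist = np.abs(img - np.array(hole_color)).max(axis=-1)
    mask = (dist > tol).astype(np.float32)   # 1 = valid, 0 = hole
    return mask[None, None]                  # shape (1, 1, H, W) for the network
```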
@liuguilin1225 Another question: in your paper, are only images generated from the original images and masks used?