Pytorch-Attention-Guided-CycleGAN
Implementation and paper differences
Very clean code. However, I have found what I believe are differences between the paper and the code implementation in the model structure. Could you please share why these differences exist?
- According to Appendix A, the last layer of the generator should be c3s1-3-T, but c7s1-3-T is used in the code instead.
- The second up-scaling layer in the attention network is commented out (and wouldn't keeping it mean the following conv should have stride 2?).
- The resblocks do not seem to apply ReLU to the output. The paper does not specify (it just says to use resblocks), but from what I know about them, the sum (out + x) should be passed through a ReLU?
- The s′new term from Equation 6 seems to be missing?
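To make the resblock point concrete, here is a minimal sketch of the two variants I mean (the class name, layer choices, and `post_relu` flag are illustrative, not the repo's actual code): the original ResNet applies a ReLU after the skip-connection sum, while many CycleGAN implementations return the sum directly.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Illustrative residual block. `post_relu` toggles whether the
    skip-connection sum is passed through a final ReLU (original
    ResNet style) or returned as-is (common CycleGAN style)."""

    def __init__(self, channels: int, post_relu: bool = False):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )
        self.post_relu = post_relu

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.block(x) + x  # skip connection
        # Original ResNet relus the sum; the repo's blocks appear not to.
        return torch.relu(out) if self.post_relu else out
```

With `post_relu=True` the block's output is clamped to be non-negative, which changes what the following layer sees; that is why I am asking whether omitting it was intentional.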