context_encoder_pytorch
Pretrained model's result on test images
Hi! I notice that the pretrained model's result on the test images is about 15.79% L1 loss and about 5.31% L2 loss, whereas the paper reports about 9.37% L1 loss and about 1.96% L2 loss. Could you suggest some possible reasons for this difference? Thanks!
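For reference, here is a minimal sketch of how such percentage reconstruction losses are typically computed: mean per-pixel L1 and L2 error between the inpainted region and the ground truth, scaled to a percentage. This assumes pixel values normalized to [0, 1]; the repo's actual evaluation script may differ (e.g. a different normalization range or evaluating only the masked region), which is one possible source of the discrepancy.

```python
import torch

def reconstruction_errors(pred, target):
    """Mean per-pixel L1 and L2 reconstruction error, as percentages.

    Assumes both tensors hold pixel values in [0, 1], so the error is
    expressed relative to the full dynamic range.
    """
    l1 = (pred - target).abs().mean().item() * 100.0   # mean absolute error, in %
    l2 = ((pred - target) ** 2).mean().item() * 100.0  # mean squared error, in %
    return l1, l2

# Toy usage: identical images give zero error on both metrics.
a = torch.rand(1, 3, 64, 64)
print(reconstruction_errors(a, a))  # → (0.0, 0.0)
```

Note that under this convention the L2 number is the mean *squared* error, so it is naturally smaller than the L1 number when per-pixel errors are below 1, which matches both sets of figures above.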