Zero-DCE
the size of input images
Hi, thanks for your interesting work. I have three questions:
- 1. The paper says the training images are resized to 512×512 (Section 4), but the code resizes them to 256×256 (dataloader.py). Why? (A sketch of the resize step in question follows after this message.)
- 2. According to the code, the network is trained on images uniformly resized to 256×256 but tested on images at their original size; in other words, the training and testing image sizes differ. Could you explain the reason?
- 3. What image size was the pretrained model uploaded in the snapshots folder trained with: 512×512 or 256×256?
Thanks.
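For reference, below is a minimal sketch of the fixed-size resize step that question 1 refers to. This is an illustration, not the repo's actual dataloader.py; the function name `load_training_image` and the constant `TRAIN_SIZE` are hypothetical.

```python
# Hypothetical sketch of a train-time loader that resizes every image
# to one fixed square size before converting it to a tensor.
import numpy as np
import torch
from PIL import Image

TRAIN_SIZE = 256  # 256 in dataloader.py, 512 in the paper (the point of question 1)

def load_training_image(path, size=TRAIN_SIZE):
    img = Image.open(path).convert("RGB")
    img = img.resize((size, size), Image.BICUBIC)     # fixed square size at train time
    arr = np.asarray(img, dtype=np.float32) / 255.0   # scale pixel values to [0, 1]
    return torch.from_numpy(arr).permute(2, 0, 1)     # HWC -> CHW
```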
Hi,
- It doesn't matter whether you use 256×256 or 512×512 for training. Our method is based on non-reference learning, so you can achieve similar results either way.
- It is common to use different sizes for training and testing when the network is fully convolutional (see the sketch below).
- We used 512×512 to train our model.
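To illustrate the second point: a network built only from convolutional layers does not fix the input resolution, so weights learned at 256×256 or 512×512 apply directly to full-resolution test images. The three-layer model below is a minimal sketch of this property, not the actual DCE-Net architecture.

```python
# A purely convolutional model accepts any spatial size: the same weights
# run on the training size and on arbitrary test sizes, and the output
# keeps the input's height and width.
import torch
import torch.nn as nn

fcn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)

for h, w in [(256, 256), (512, 512), (400, 600)]:  # train size and arbitrary test sizes
    x = torch.randn(1, 3, h, w)
    y = fcn(x)
    print(y.shape)  # (1, 3, h, w): spatial size is preserved
```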
Thanks for your rapid response.