keras-applications
No validation of the listed accuracy values in the README possible
Hi, it is not clear (at least to me) how the numbers in the README are generated. I've been trying to replicate/validate the listed accuracy values for the last few days, but I'm always off by at least a few percent, regardless of what pre-processing I use. Would it be possible to add the script used to generate these numbers?
Best regards, Drechsler
@udrechsler,
I will share my inference codes in the near future. Key recipes for ImageNet are the following:
- Down-sampling should not produce a square image. The right way is to resize so that the shorter side is 256; in other words, a resized image is either (256 x W) or (H x 256).
- The down-sampling scheme is pretty important. I evaluated all the models with `cv2.resize(img, (h, w), interpolation=cv2.INTER_CUBIC)`, while the Keras default is `img.resize((w, h), pil_image.NEAREST)`. You can try `pil_image.BICUBIC` or OpenCV like me.
- After the down-sampling, you should crop the 224x224 center region from the (256 x W) or (H x 256) image.
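The recipe above might be sketched like this. This is my own reading of the steps, not the author's released script; the function and helper names are hypothetical, and OpenCV is only one of the interpolation options mentioned:

```python
def resize_dims(h, w, short=256):
    """Target (h, w) after scaling the shorter side to `short`,
    keeping the aspect ratio (so the result is 256 x W or H x 256)."""
    if h < w:
        return short, int(round(w * short / h))
    return int(round(h * short / w)), short

def center_crop_box(h, w, size=224):
    """Top-left corner and extent of the centered size x size crop."""
    return (h - size) // 2, (w - size) // 2, size

def preprocess(img, short=256, size=224):
    """Resize shorter side to `short` (bicubic), then center-crop `size`."""
    import cv2  # imported lazily; PIL with BICUBIC should work similarly
    h, w = img.shape[:2]
    nh, nw = resize_dims(h, w, short)
    # note: cv2.resize expects dsize as (width, height)
    img = cv2.resize(img, (nw, nh), interpolation=cv2.INTER_CUBIC)
    top, left, s = center_crop_box(nh, nw, size)
    return img[top:top + s, left:left + s]
```

For a 480x640 input, for example, this resizes to 256x341 and then crops the centered 224x224 window, rather than squashing the whole image to 224x224.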
I'm struggling to match the numbers shown in the README as well; I consistently fall 7-8% behind. I've tried implementing the method @taehoonlee described, but I don't think I fully understand the process, as my accuracy dropped further. Have you released the code yet?
I think I've got it working now. What do you do in the cases where an image is smaller than 224x224 to begin with?
@BenTaylor3115, please just keep the ratio 7/8 (= 224/256). And as far as I know, there are no images smaller than 224 in the official ImageNet evaluation set.
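If I read the 7/8 rule correctly, it generalizes the 224-from-256 crop to any input size: scale the crop with the shorter side instead of keeping it fixed at 224. A hypothetical helper (my interpretation, not confirmed code):

```python
def crop_size(short_side, ratio=224 / 256):
    """Center-crop size for an image whose shorter side is `short_side`,
    keeping the same crop-to-resize ratio (7/8) as the standard recipe."""
    return int(round(short_side * ratio))
```

So a 256-pixel shorter side yields the usual 224 crop, while a 160-pixel shorter side would be cropped to 140.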
I didn't think so. I may still have a problem somewhere. Do you have the code available for the down-sampling / cropping you used to achieve the results in the README? I'm happy to do the debugging myself if I have a reference for the correct approach.