Pytorch-Deeplab
The normalization in data pre-processing
I notice that the normalization of images in your code uses "image - mean" rather than the more common "(image - mean) / std". Is there a specific reason for this choice of normalization?
Regarding the scale of pixel values, your code works in the range [-128, 128] rather than the [-0.5, 0.5] used in other works. Which one is better?
Also, if I use only the ImageNet pre-trained model, which kind of image normalization and pixel scale should I use?
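For reference, here is a minimal sketch of the conventions being compared. The constants are assumptions for illustration only: the Caffe-style BGR mean approximates the repo's `IMG_MEAN`, and the other values are the commonly cited ImageNet statistics; the actual numbers in this codebase or others may differ.

```python
import numpy as np

# Assumed values for illustration; the repo's own IMG_MEAN constant and the
# statistics used by other codebases may differ.
BGR_MEAN_255 = np.array((104.0, 116.7, 122.7), dtype=np.float32)   # Caffe-style per-channel mean on a [0, 255] image
IMAGENET_MEAN = np.array((0.485, 0.456, 0.406), dtype=np.float32)  # commonly cited ImageNet RGB mean
IMAGENET_STD = np.array((0.229, 0.224, 0.225), dtype=np.float32)   # commonly cited ImageNet RGB std

def mean_only_255(image: np.ndarray) -> np.ndarray:
    """Subtract the per-channel mean from a [0, 255] image (the "image - mean" style);
    outputs fall roughly in [-128, 128]."""
    return image.astype(np.float32) - BGR_MEAN_255

def mean_std_unit(image: np.ndarray) -> np.ndarray:
    """Scale to [0, 1], then apply (image - mean) / std; outputs fall roughly in [-2.5, 2.5]."""
    image = image.astype(np.float32) / 255.0
    return (image - IMAGENET_MEAN) / IMAGENET_STD

def mean_only_unit(image: np.ndarray) -> np.ndarray:
    """Scale to [0, 1], then subtract 0.5; outputs fall roughly in [-0.5, 0.5]."""
    return image.astype(np.float32) / 255.0 - 0.5
```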
I have figured out the scale problem in data preprocessing, but I still do not know why the model does not use std in the normalization. Any help would be appreciated!
Hi @yzou2, do you mind explaining why the range of [-128, 128] is used and not [-0.5, 0.5]?