segdec-net-jim2019
Data Preprocessing
Thanks for your great work! I have a question about the preprocessing of the raw data: did you apply any preprocessing, such as subtracting the mean, dividing by the std, or normalizing the data into [0, 1]?
Hi, the image values are only scaled into the range [0, 1] using tf.image.convert_image_dtype(image, dtype=tf.float32), but they are not normalized with per-image mean and std values.
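For illustration, the effect of that call on uint8 images can be sketched in NumPy (this stand-in function is hypothetical; TensorFlow's tf.image.convert_image_dtype performs the equivalent scaling internally):

```python
import numpy as np

# Hypothetical NumPy stand-in for
# tf.image.convert_image_dtype(image, dtype=tf.float32):
# for uint8 input it scales values into [0, 1] by dividing by the
# maximum representable value (255); no mean/std normalization is done.
def convert_image_dtype(image: np.ndarray) -> np.ndarray:
    assert image.dtype == np.uint8
    return image.astype(np.float32) / 255.0

img = np.array([[0, 128, 255]], dtype=np.uint8)
scaled = convert_image_dtype(img)  # values now lie in [0, 1]
```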
I got it, thanks a lot.
Hello, I have another question. In your implementation, I found that you use batch norm for feature normalization and also learn the batch-norm scale (alpha) and offset (beta) parameters. In evaluation mode, you did not switch batch norm to eval mode, which would use the running mean and running variance. My understanding is that you want to normalize the features during inference as well, but do the learned scale and offset parameters affect this feature normalization?
Hi. Yes, we intentionally do not perform true batch normalization, due to the limitation of batch_size=1. Instead, we normalize the features of the current sample to zero mean and unit scale (not using moving-average values), with a learnable scale (alpha) and offset (beta). We implemented this using the training mode of batch norm, which with batch_size=1 always returns features normalized to the zero mean and unit scale of the current batch (in our case a batch of a single sample) and never uses the trained moving averages for normalization.
The learnable scale and offset should not do much harm, since by default they are initialized to alpha=1 and beta=0, and if they are useful for solving the problem, they will move to other values.
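The normalization described above can be sketched in NumPy (a minimal illustration, not the repository's actual code; the function name, alpha/beta defaults, and epsilon are assumptions):

```python
import numpy as np

# Sketch of batch norm in training mode with batch_size=1: the current
# sample's features are normalized per channel to zero mean and unit
# variance, then the learnable scale (alpha) and offset (beta) are applied.
# Moving averages are never used.
def per_sample_norm(x, alpha=1.0, beta=0.0, eps=1e-3):
    # x: (H, W, C) feature map of a single sample
    mean = x.mean(axis=(0, 1), keepdims=True)   # per-channel mean
    var = x.var(axis=(0, 1), keepdims=True)     # per-channel variance
    x_hat = (x - mean) / np.sqrt(var + eps)     # zero mean, unit scale
    return alpha * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(8, 8, 4))
y = per_sample_norm(x)  # roughly zero mean, unit variance per channel
```

With the default alpha=1 and beta=0 the transform is a pure normalization, which matches the point above that the learnable parameters start out harmless and only move away from these values if that helps training.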
Best, Domen
Hello, Domen. Thanks for your patient explanation, I got your points!