YeNet-Pytorch
use_batch_norm
In train_loader, pair_constraint=not(args.use_batch_norm). Why is pair_constraint set to False when batch normalization is used?
Also, in your implementation, the images are not normalized to [0.0, 1.0]?
Hi,
In train_loader, pair_constraint=not(args.use_batch_norm). Why is pair_constraint set to False when batch normalization is used?
First of all, the original publication doesn't use batch normalization. If you want to use it, you have to avoid putting a cover and the stego generated from that same cover in the same batch; otherwise the network will try to use the batch mean and variance to cheat. In general, you can't use batch normalization if there are dependencies between the examples within a batch.
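To make the distinction concrete, here is a minimal sketch of what such a batching policy could look like. The function `make_batches` and its layout are hypothetical illustrations, not the repo's actual loader: with pair_constraint=True a cover and its stego always share a batch, while with pair_constraint=False (the batch-norm case) they are kept in disjoint batches so batch statistics cannot leak the pairing.

```python
import random

def make_batches(n_images, batch_size, pair_constraint, seed=0):
    """Hypothetical sketch of pair-constrained vs. independent batching.

    Each training example is (image_id, kind), kind in {'cover', 'stego'}.
    pair_constraint=True  -> a cover and its stego always share a batch
                             (fine without batch norm).
    pair_constraint=False -> a cover and its stego never share a batch,
                             as required when batch norm is enabled.
    """
    rng = random.Random(seed)
    ids = list(range(n_images))
    rng.shuffle(ids)
    if pair_constraint:
        # Interleave pairs [c0, s0, c1, s1, ...] so chunking by an even
        # batch_size keeps each pair together.
        flat = [(i, k) for i in ids for k in ('cover', 'stego')]
    else:
        # Chunk covers and stegos separately (independently shuffled),
        # so no batch ever contains both versions of the same image.
        covers = [(i, 'cover') for i in ids]
        stego_ids = ids[:]
        rng.shuffle(stego_ids)
        stegos = [(i, 'stego') for i in stego_ids]
        flat = covers + stegos
    return [flat[i:i + batch_size] for i in range(0, len(flat), batch_size)]
```

With batch norm enabled, only the second layout is safe: the per-batch mean and variance are then computed over images whose pairing the network cannot reconstruct.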
In your implementation, the images are not normalized to [0.0, 1.0]?
If you look at the preprocessing part, it's made for [0, 255] images (the output will naturally be very close to a [-1, 1] distribution).