
Virtual Batch Normalization

Open · sahiliitm opened this issue 7 years ago · 3 comments

If I understand the code correctly, it uses virtual batch normalization only for the inputs and not for the intermediate layers.

Was this also the setup used for the Atari results stated in the paper?

Also, what was the network architecture used for the Atari domain?

sahiliitm · May 10 '17 05:05

@sahiliitm Hello, the paper states: "We used the same preprocessing and feedforward CNN architecture used by (Mnih et al., 2016)". So it should be the traditional two-layer feedforward architecture.

joyousrabbit · May 10 '17 08:05
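For reference, that architecture, as described in Mnih et al. (2013/2016), is two convolutional layers followed by a fully connected layer. Below is a minimal sketch, written in PyTorch for illustration; this repo actually uses TensorFlow, and the layer sizes are my reading of the papers, not taken from this codebase:

```python
import torch
import torch.nn as nn

class AtariFeedforward(nn.Module):
    """Feedforward CNN from Mnih et al. (2013/2016): two conv layers
    plus one fully connected layer. Input: 4 stacked 84x84 grayscale frames."""
    def __init__(self, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),   # 84x84 -> 20x20
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),  # 20x20 -> 9x9
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, x):
        return self.net(x)
```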

Hi @sahiliitm

Could you point me to the code where you see a virtual batch normalization implementation? The only thing I found is:

@property
def needs_ref_batch(self):
    return False

which is currently not implemented.

PatrykChrabaszcz · May 18 '17 09:05
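For context, virtual batch normalization (Salimans et al., 2016, "Improved Techniques for Training GANs") normalizes activations with statistics computed from a fixed reference batch rather than from the current minibatch. A minimal sketch of the idea; the class and method names are illustrative, not from this repo, and this is a simplified formulation (the original also mixes the current example into the reference statistics):

```python
import torch
import torch.nn as nn

class VirtualBatchNorm(nn.Module):
    """Normalize with mean/std frozen from a fixed reference batch,
    instead of per-minibatch statistics as in ordinary batch norm."""
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("ref_mean", torch.zeros(num_features))
        self.register_buffer("ref_std", torch.ones(num_features))

    def set_reference(self, ref_batch):
        # Called once with the reference batch; statistics are then frozen.
        self.ref_mean = ref_batch.mean(dim=0)
        self.ref_std = ref_batch.std(dim=0)

    def forward(self, x):
        return self.gamma * (x - self.ref_mean) / (self.ref_std + self.eps) + self.beta
```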

The code here has only z-normalization of the inputs, no virtual batch norm. There are also no hyperparameters for Atari. OpenAI, please be more reproducible! :-)

louiskirsch · Apr 17 '20 06:04
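To make the distinction concrete: z-normalization of the inputs means keeping running observation statistics and scaling each observation to roughly zero mean and unit variance before the network sees it. A rough numpy sketch of that pattern, illustrative only and not this repo's exact code:

```python
import numpy as np

class RunningObsStat:
    """Running mean/std over observations, used to z-normalize inputs:
    x_norm = (x - mean) / std."""
    def __init__(self, shape, eps=1e-2):
        self.n = 0
        self.sum = np.zeros(shape)
        self.sumsq = np.full(shape, eps)  # eps avoids a zero std before any updates

    def update(self, obs_batch):
        # Accumulate sufficient statistics from a batch of observations.
        self.n += obs_batch.shape[0]
        self.sum += obs_batch.sum(axis=0)
        self.sumsq += np.square(obs_batch).sum(axis=0)

    def normalize(self, obs):
        mean = self.sum / max(self.n, 1)
        var = np.maximum(self.sumsq / max(self.n, 1) - np.square(mean), 1e-8)
        return (obs - mean) / np.sqrt(var)
```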