
BN on test set

hardmaru opened this issue on Aug 29, 2016 · 5 comments

Nice blog post!

If you see any performance error I might’ve done, I’d love to know!

One comment: when you evaluate on the validation/test set, you should use the statistics saved during training. Looking at the code, I think you are also computing the moments during validation/test runs.

hardmaru avatar Aug 29 '16 02:08 hardmaru

Totally true .. I guess the batch size of 100 gives "good enough" statistics for the problem so I forgot to add it in.

Will try to update with a version that stores population statistics and properly uses those at test time.

OlavHN avatar Aug 30 '16 14:08 OlavHN
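
A minimal sketch of what "stores population statistics and properly uses those at test time" could look like in TF 1.x-style graph code (which is what the repo used at the time). The `batch_norm` function and the `is_training` placeholder below are illustrative, not taken from the repo: training steps normalize with the batch moments while updating running averages, and evaluation normalizes with the stored averages without ever estimating moments from the evaluation batch.

```python
import tensorflow as tf  # assumes the TF 1.x graph-mode API

def batch_norm(x, is_training, decay=0.99, epsilon=1e-5, scope="bn"):
    """Batch moments while training, stored population moments at test time."""
    with tf.variable_scope(scope):
        size = x.get_shape().as_list()[-1]
        # A small gamma (0.1) is a common choice for recurrent batch norm.
        scale = tf.get_variable("scale", [size], initializer=tf.constant_initializer(0.1))
        offset = tf.get_variable("offset", [size], initializer=tf.zeros_initializer())

        # Running (population) statistics, excluded from training.
        pop_mean = tf.get_variable("pop_mean", [size],
                                   initializer=tf.zeros_initializer(), trainable=False)
        pop_var = tf.get_variable("pop_var", [size],
                                  initializer=tf.ones_initializer(), trainable=False)

        batch_mean, batch_var = tf.nn.moments(x, [0])

        def train_phase():
            # Update the running averages, then normalize with the batch moments.
            update_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
            update_var = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay))
            with tf.control_dependencies([update_mean, update_var]):
                return tf.nn.batch_normalization(x, batch_mean, batch_var,
                                                 offset, scale, epsilon)

        def test_phase():
            # Nothing is estimated from the evaluation batch.
            return tf.nn.batch_normalization(x, pop_mean, pop_var, offset, scale, epsilon)

        return tf.cond(is_training, train_phase, test_phase)
```

At training time `is_training` is fed as `True`, so the running averages are updated on every step; at evaluation time it is fed as `False` and the stored statistics are used regardless of batch size.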

TF slim has a recurrent batch norm with population statistics that you can check out.

I like your implementation style more, though, since it's elegant and in pure TF.

You might also want to play around with the random-permutation MNIST task, since it's only an extra line of code :)


hardmaru avatar Aug 31 '16 00:08 hardmaru
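
For reference, the permuted-MNIST variant mentioned above is just a fixed random reordering of the 784 pixel positions, applied identically to every training and test example. A minimal NumPy sketch, assuming `x_train` and `x_test` already hold flattened `[num_examples, 784]` images:

```python
import numpy as np

rng = np.random.RandomState(42)   # fixed seed: train and test must share one permutation
perm = rng.permutation(784)       # one random ordering of the 784 pixel positions

x_train = x_train[:, perm]        # reorder every image's pixels the same way
x_test = x_test[:, perm]
```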

I've tried running with population statistics a bit now, with really poor results on sequential MNIST. Same results when using slim.batch_norm.

The model seems to be dependent on the batch normalization.

To test, I tried using local batch statistics but increasing the batch size from 100 to 1000. That works better than full population statistics, but much worse than batch statistics with a batch size of 100.

The graphs in the paper look very much like mine when I use local batch statistics; however, they explicitly mention using population statistics for their final results, so I'm not sure what's going on in my code.

OlavHN avatar Sep 03 '16 12:09 OlavHN

I've been trying the same thing recently and share your frustration. I think I found out what's going on, and it is not pretty. Basically, in my implementation, and I think also in slim, the population statistics are recorded once per layer and assumed to be the same for every time step of the sequence.

But I think in the paper, the statistics are actually recorded separately at each time step, so for MNIST there would be 784 sets of statistics. The paper shows that all the statistics converge over time for certain tasks (I guess for text, since the distribution must be time-invariant), but I suspect that for MNIST the statistics over time will not converge...

I also got really good results just using a vanilla LSTM but initializing the hidden-to-hidden layer to the exact identity (not 0.95 identity).


hardmaru avatar Sep 03 '16 17:09 hardmaru
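
In other words, the suggested fix is to keep a separate population mean/variance for every time step rather than one pair shared across the whole sequence. A rough NumPy sketch of that bookkeeping (the shapes, decay, and function names are illustrative, not taken from either implementation):

```python
import numpy as np

T, H = 784, 256   # sequence length and hidden size (illustrative)
decay = 0.99

# One running mean/variance pair per time step -- 784 sets for sequential MNIST --
# instead of a single pair shared across all steps.
pop_mean = np.zeros((T, H))
pop_var = np.ones((T, H))

def update_stats(t, h_t):
    """Called during training; h_t is the [batch, H] pre-activation at step t."""
    pop_mean[t] = decay * pop_mean[t] + (1 - decay) * h_t.mean(axis=0)
    pop_var[t] = decay * pop_var[t] + (1 - decay) * h_t.var(axis=0)

def normalize_at_test(t, h_t, gamma, beta, eps=1e-5):
    """At test time, look the statistics up by step instead of re-estimating them."""
    return gamma * (h_t - pop_mean[t]) / np.sqrt(pop_var[t] + eps) + beta
```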

@hardmaru Recently I also got worse results on the test set when using the population mean and variance. You said you got good results just using a vanilla LSTM; could you please share your code and explain what's going on?

I also got really good results just using a vanilla LSTM but initializing the hidden-to-hidden layer to the exact identity (not 0.95 identity).

liangsun-ponyai avatar Apr 05 '20 15:04 liangsun-ponyai
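
As a footnote on the identity-initialization trick quoted above: the idea is simply to start the recurrent (hidden-to-hidden) weight matrix at the exact identity rather than a scaled 0.95 identity. A hypothetical NumPy sketch for one recurrent weight block (exactly how this is applied across an LSTM's gates is not spelled out in the thread):

```python
import numpy as np

H = 256                              # hidden size (illustrative)

W_hh = np.eye(H)                     # exact identity, not 0.95 * np.eye(H)
W_xh = np.random.randn(1, H) * 0.01  # input-to-hidden weights keep a small random init
                                     # (one pixel per step for sequential MNIST)
```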