
Chapter 5: can't reproduce "often end up with more than 95% accuracy in less than 10 epochs"

Open jeffhgs opened this issue 5 years ago • 5 comments

Chapter 5 reads:

But it’s noteworthy to observe that you often end up with more than 95% accuracy in less than 10 epochs.

However, I don't share this experience.

I have captured some scenarios in a branch.

https://github.com/jeffhgs/deep_learning_and_the_game_of_go/blob/test_nn_chapter_5/code/dlgo/nn/test_nn.py

You can run as:

(cd code/dlgo/nn && python test_nn.py)

The program downsamples the MNIST training and test data, trains, repeats, and then computes summary statistics.
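For reference, the downsampling step amounts to drawing a random subset of examples. This is only an illustrative sketch (the helper name and toy arrays are mine, not the actual test_nn.py code):

```python
import numpy as np

def downsample(x, y, n, seed=0):
    # Draw a random subset of n examples without replacement.
    # Illustrative helper, not the actual test_nn.py implementation.
    rng = np.random.RandomState(seed)
    idx = rng.choice(len(x), size=n, replace=False)
    return x[idx], y[idx]

# Toy stand-in for MNIST-shaped arrays
x = np.arange(1000).reshape(1000, 1)
y = np.arange(1000) % 10

xs, ys = downsample(x, y, 100)
print(xs.shape)  # (100, 1)
```

Fixing the seed makes repeated runs comparable, which matters when computing summary statistics across scenarios.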

The program uses the unittest framework for regression testing, and it also emits json documents between scenarios for offline analysis.

The program depends on the incomplete branch I made for #12.

The typical fitting accuracy I get for the full MNIST dataset is around 57%.

To help with reproduction, I am holding back all inessential changes, including the fix for the performance issue in #11.

I tried to run all the configurations listed in the test, but unfortunately my job ran past the 4h timeout I assigned it, so I'll have to increase it and try again. I will attach the 4 instance-hours' worth of data I have.

This is my configuration:

host: AWS t2.large
OS: ubuntu
Python: 3.6.8

jeffhgs avatar Feb 21 '19 04:02 jeffhgs

Data from a longer run. Note this one also timed out, so it doesn't end cleanly, but you can clearly see the fit measurements. Each full-dataset run took just under 2h for 10 epochs.

test_nn_jeffhgs_2019-02-20_try3.txt

jeffhgs avatar Feb 23 '19 04:02 jeffhgs

That's pretty strange. Last time I evaluated this, I certainly ended up with 95%-plus most of the time. Not that it matters all that much, since the lesson is that the algorithm learns, but I get your frustration; sorry for the inconvenience.

My suspicion is that I used a different learning rate altogether for my experiments.
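To illustrate how much the learning rate alone can change an outcome like this, here is a minimal gradient-descent sketch on a toy quadratic (not the book's network): the identical update rule either converges or diverges depending only on the step size.

```python
def gd(lr, steps=50, w0=1.0):
    # Minimize f(w) = w**2 with plain gradient descent; gradient is 2*w.
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(abs(gd(0.1)))  # shrinks toward 0: converges
print(abs(gd(1.1)))  # grows without bound: diverges
```

With lr=0.1 each step multiplies the weight by 0.8; with lr=1.1 it multiplies by -1.2, so the iterate blows up. A learning rate mismatch between the book's experiments and the published code could plausibly account for a gap as large as 95% vs. 57%.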

maxpumperla avatar Feb 23 '19 09:02 maxpumperla

For me, the network doesn't train most of the time.

[screenshot: training run output showing the network failing to learn]

arisliang avatar Oct 13 '21 09:10 arisliang

A shallower, smaller network architecture, with the initial dense-layer weights scaled by 0.1, worked better for me.
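One plausible reason the 0.1 scaling helps: with standard-normal weights and 784 inputs, the dense layer's pre-activations have a standard deviation around sqrt(784) = 28, so the sigmoid saturates and its gradient vanishes. A minimal numpy sketch (the 784-input, 30-unit sizes mirror the chapter's MNIST network; the saturation threshold is my own illustrative choice):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
x = rng.randn(784)  # one flattened MNIST-sized input (illustrative)

for scale in (1.0, 0.1):
    w = scale * rng.randn(784, 30)  # dense layer with 30 units
    a = sigmoid(x @ w)              # unit activations
    grad = a * (1 - a)              # sigmoid derivative at each unit
    frac_saturated = float(np.mean(grad < 1e-3))
    print(scale, frac_saturated)
```

With scale 1.0 most units start saturated (derivative effectively zero), so gradient descent barely moves; with scale 0.1 almost none do, and training can proceed.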

[screenshot: training run output with the smaller network and scaled initialization]

arisliang avatar Oct 13 '21 16:10 arisliang