Recipes
Highway Networks demo
Here's a revamped version of the notebook I made for "Highway Networks" by Srivastava et al. (2015): http://arxiv.org/abs/1505.00387
Nice and well-explained!
For now, I thought that "paper" Recipes should aim to faithfully reproduce results from a paper, i.e., use the same hyperparameters and achieve the same results. That's probably not always feasible, though. Still, to get a bit closer to the paper, you could follow the hints in Section 4: Transform gate biases were initialized to -2 for MNIST, not -4, and throughout the paper they use 50 hidden units, not 40. It would be cool to try reproducing Figure 2, or at least to add a final comment that you didn't try that but we'd welcome pull requests doing so.
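Not from the notebook itself, but for anyone skimming this thread: the layer under discussion computes y = H(x)·T(x) + x·(1 − T(x)), where T is the sigmoid transform gate, and the bias being debated is the one on T. A minimal NumPy sketch of one forward pass (function names, weight scales, and batch size here are my own illustrative choices, not the notebook's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_forward(x, W_h, b_h, W_t, b_t):
    """One highway layer: y = H(x) * T(x) + x * (1 - T(x))."""
    h = np.maximum(0.0, x @ W_h + b_h)  # plain transform H(x), ReLU
    t = sigmoid(x @ W_t + b_t)          # transform gate T(x)
    return h * t + x * (1.0 - t)

rng = np.random.default_rng(0)
n_units = 50                        # the paper uses 50 hidden units
x = rng.normal(size=(4, n_units))   # toy batch of 4
W_h = rng.normal(scale=0.1, size=(n_units, n_units))
W_t = rng.normal(scale=0.1, size=(n_units, n_units))
b_h = np.zeros(n_units)
b_t = np.full(n_units, -2.0)        # transform gate bias init from Section 4

y = highway_forward(x, W_h, b_h, W_t, b_t)
```

With a strongly negative gate bias the layer starts out close to the identity, which is the whole point of the initialization scheme.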
I don't have much time to work on this right now (beyond cosmetic improvements) so I'm okay with moving it to 'examples' as well if you prefer.
> I'm okay with moving it to 'examples' as well if you prefer.
I still think it fits in "papers" with the perspective of it reproducing Figure 2 sometime. Don't worry if you can't add a comment in the end, we can also add an Issue for that.
@benanne I'm going to go ahead and merge this and #13 if you don't mind. I can make a PR later to address f0k's comments if you don't have time.
I was actually planning to have a look at it today. But I can do a new PR after you merge them as well, up to you.
If you're working on them further that's great, let's wait till you're done. I just didn't want them to be stuck in limbo too long, they seem quite good already.
On Sun, Aug 30, 2015 at 10:25 AM, Sander Dieleman wrote:
> I was actually planning to have a look at it today. But I can do a new PR after you merge them as well, up to you.
Sure :) I'll try to sort out @f0k's remarks by tonight.
Just tried initializing the biases to `-2.0` instead of `-4.0`, but this severely slows down convergence and seems to make things slightly unstable as well. So I'll stick with `-4.0`. It's no use trying to match the parameters against the paper's anyway, because a lot of them aren't even mentioned (learning rate etc.).
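For context on why `-2.0` vs. `-4.0` matters at all: right after initialization the gate pre-activations sit near the bias value, so the gate opens to roughly sigmoid(bias). A tiny illustration (plain Python; the helper name is mine):

```python
import math

def gate_open_fraction(bias):
    # At init the gate pre-activation is roughly its bias,
    # so the transform gate passes about sigmoid(bias) of H(x).
    return 1.0 / (1.0 + math.exp(-bias))

print(gate_open_fraction(-2.0))  # ~0.119: ~12% transform at init
print(gate_open_fraction(-4.0))  # ~0.018: ~2% transform at init
```

So `-2.0` lets roughly six times more of the (initially random) transform through than `-4.0`, which is at least consistent with the less stable behaviour observed here.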
I will have a go at reproducing (a subset of) figure 2 though!
I tried changing some parameter values to match the paper better, but in the end I decided to leave most of them as they were because it just caused trouble :) Haven't gotten around to doing anything with the figure either, if anyone wants to do that in a separate PR feel free.
> It's no use trying to match the parameters against the paper's anyway, because a lot of them aren't even mentioned (learning rate etc.).
Yes, if it's difficult to find settings that work, we should just leave it at that. I didn't expect that to cause trouble. Thanks for trying and documenting it! Looks good to merge for me. Eben, you could add an Issue about reproducing Figure 2, maybe somebody is interested in trying that sometime.