Kai Arulkumaran
Good luck! Finally got round to reading the paper and noticed some extras in the appendix. Seems like for completeness we'll need to add a stochastic ALE setting for this...
FYI there's [another (new) paper](https://arxiv.org/abs/1606.04695) from DeepMind with similar goals...
@iassael couple of questions about your layer. Can it use more complicated heads (like the dueling head)? How does picking a new head for a new episode work...
@iassael I'm focusing on some of the other components at the moment so I'm not sure I'll get to this any time soon, but feel free to give it a...
Sorry, I don't understand the issue?
No, as an autoencoder it should reconstruct the input. Unfortunately it has been a very long time since I've used Lua and Torch7, so I'm not going to attempt to...
If you're looking at Table 1, it looks like the shallow FC WTA-AE uses 2000 units and 5% sparsity. They don't provide many training details - optimiser, minibatch size, number...
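For reference, the lifetime sparsity used by fully connected WTA autoencoders keeps, for each hidden unit, only its top k% of activations across the minibatch and zeroes the rest. A minimal NumPy sketch of that selection step (illustrative only - the function name, shapes, and 5% rate are assumptions here, not the paper's or repo's code):

```python
import numpy as np

def lifetime_sparsity(h, rate=0.05):
    """Keep, per hidden unit (column), the top `rate` fraction of
    activations across the minibatch (rows); zero everything else.
    h: (batch, units) array of hidden activations."""
    k = max(1, int(round(rate * h.shape[0])))  # e.g. 5% of the batch
    # Per-unit threshold: the k-th largest activation in each column
    thresh = np.sort(h, axis=0)[-k]  # shape (units,)
    return np.where(h >= thresh, h, 0.0)
```

So with 5% sparsity and a minibatch of 100, each of the 2000 units would fire on only its 5 strongest examples per batch; the decoder is then trained to reconstruct from that sparse code.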
I've now added code that visualises the decoder weights at the end of training, so the best check is to see if you can tune training to match Figure 1 in the...
Closing for now as Ubuntu 16.04 support for lots of software is still questionable.
I am no longer actively maintaining this repo but am able to take contributions. If you are able to add updated Dockerfiles that build and run successfully then I will...