Cédric Bovar
I noticed something in your C# code: the 4th dimension of your input/output data should be the batch size. But I see it's the whole dataset size. You need to...
You can get some inspiration from [this line](https://github.com/cbovar/ConvNetSharp/blob/master/Examples/MnistDemo/Program.cs#L54) in the MNIST example. I'm travelling and don't have access to a computer; typing code on a mobile is not ideal.
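Roughly, the idea is the following (an untested sketch; `BuildInputBatch` is a made-up helper name, and the 28x28x1 image shape is taken from the MNIST demo):

```csharp
using ConvNetSharp.Volume;
using ConvNetSharp.Volume.Double;

static class BatchSketch
{
    // Builds the input volume for ONE mini-batch.
    // 'pixels' holds the 28x28x1 images of exactly 'batchSize' samples,
    // so the 4th dimension of the Shape is the batch size, not the dataset size.
    public static Volume<double> BuildInputBatch(double[] pixels, int batchSize)
    {
        return BuilderInstance.Volume.From(pixels, new Shape(28, 28, 1, batchSize));
    }
}
```

You would call something like this once per mini-batch inside the training loop, instead of building a single volume that holds the whole dataset.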
It's not my priority right now (I am currently working on RNN layers). If I understand the inception layer correctly, it is a maxout + conv 1x1 + conv 3x3 + conv...
I haven't experienced that with ConvNetSharp. Is it possible your application is still running in the background, which prevents you from overwriting it?
Hi, I think you are not giving one-hot encoded labels as Y while training, i.e. LabelVolume should have a shape of [1, 1, NumberOfCategories, BatchSize] like it is done...
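For illustration, building such a label volume could look like this (an untested sketch; `BuildLabelBatch` is a made-up helper, and it assumes the layout where the class index varies fastest within each sample, as in the MNIST demo):

```csharp
using ConvNetSharp.Volume;
using ConvNetSharp.Volume.Double;

static class LabelSketch
{
    // One-hot encodes integer class labels into a volume of shape
    // [1, 1, numberOfCategories, batchSize].
    public static Volume<double> BuildLabelBatch(int[] classes, int numberOfCategories)
    {
        var batchSize = classes.Length;
        var oneHot = new double[numberOfCategories * batchSize];

        for (var n = 0; n < batchSize; n++)
        {
            oneHot[n * numberOfCategories + classes[n]] = 1.0; // 1.0 at the class index, 0.0 elsewhere
        }

        return BuilderInstance.Volume.From(oneHot, new Shape(1, 1, numberOfCategories, batchSize));
    }
}
```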
Yes, you can do mini-batches. And you are correct: the 4th dimension of the Shape is the batch size.
I'm not planning on implementing it any time soon. But PRs are welcome of course :)
For DQN you can check out [this repo](https://github.com/dubezOniner/Deep-QLearning-Demo-csharp). It should be easy to adapt it to a newer version of ConvNetSharp. I have worked on LSTM. I will eventually release a...
I also see a DQN using WPF for display [in this fork](https://github.com/jankrib/ConvNetSharp/tree/deep-q-demo)
Maybe you could also try a very simple task: reproduce the input:
* 0 -> 0
* 1 -> 1

It may fit in a unit test.
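Something like this (an untested sketch based on the simple-network example in the README; the layer sizes, learning rate, iteration count and namespaces are guesses, not values from the project):

```csharp
using System;
using ConvNetSharp.Core;
using ConvNetSharp.Core.Layers.Double;
using ConvNetSharp.Core.Training.Double;
using ConvNetSharp.Volume;
using ConvNetSharp.Volume.Double;

class IdentityTaskSketch
{
    static void Main()
    {
        // Tiny network: 1 input -> 2-way softmax.
        var net = new Net<double>();
        net.AddLayer(new InputLayer(1, 1, 1));
        net.AddLayer(new FullyConnLayer(2));
        net.AddLayer(new SoftmaxLayer(2));

        var trainer = new SgdTrainer(net) { LearningRate = 0.1 };

        // The two training pairs: 0 -> class 0, 1 -> class 1.
        var x0 = BuilderInstance.Volume.From(new[] { 0.0 }, new Shape(1, 1, 1, 1));
        var y0 = BuilderInstance.Volume.From(new[] { 1.0, 0.0 }, new Shape(1, 1, 2, 1));
        var x1 = BuilderInstance.Volume.From(new[] { 1.0 }, new Shape(1, 1, 1, 1));
        var y1 = BuilderInstance.Volume.From(new[] { 0.0, 1.0 }, new Shape(1, 1, 2, 1));

        for (var i = 0; i < 500; i++)
        {
            trainer.Train(x0, y0);
            trainer.Train(x1, y1);
        }

        // After training, the network should reproduce the input.
        Console.WriteLine("P(class 0 | x = 0) = " + net.Forward(x0).Get(0)); // expected close to 1
        Console.WriteLine("P(class 1 | x = 1) = " + net.Forward(x1).Get(1)); // expected close to 1
    }
}
```

In an actual unit test you would replace the `Console.WriteLine` calls with assertions that the two probabilities are above some threshold.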