Juan Cazala

Results: 84 comments by Juan Cazala

There are no limits on the size of the inputs/outputs of an entry in a dataset; they just have to match the size of the network (inputs = size of...
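For example, a check like the following (a hypothetical helper, not part of synaptic) would confirm that every entry in a training set matches a network's input/output sizes:

```javascript
// Hypothetical helper: verifies each dataset entry matches the network's
// input and output sizes before training.
function validateTrainingSet(trainingSet, inputSize, outputSize) {
  return trainingSet.every(function (entry) {
    return entry.input.length === inputSize &&
           entry.output.length === outputSize;
  });
}

// A 2-input / 1-output dataset, e.g. for a Perceptron(2, 3, 1):
var xorSet = [
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
];

console.log(validateTrainingSet(xorSet, 2, 1)); // true
```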

If you are using an optimized network (which is the default behaviour), the extra inputs should be ignored, as if they didn't exist. In an unoptimized network it should throw an...

I don't know if this helps, but the DSR example is a classification problem that uses cross entropy as its cost function: http://caza.la/synaptic/#/dsr
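As a sketch of what that cost computes, here is a plain-JS binary cross entropy (synaptic exposes a version of this as `Trainer.cost.CROSS_ENTROPY`; the epsilon below is just a guard against `log(0)`):

```javascript
// Binary cross-entropy cost: sums -[t*log(o) + (1-t)*log(1-o)] over all
// output units. Lower values mean the outputs match the targets better.
function crossEntropy(target, output) {
  var error = 0;
  for (var i = 0; i < output.length; i++) {
    error -= target[i] * Math.log(output[i] + 1e-15) +
             (1 - target[i]) * Math.log(1 - output[i] + 1e-15);
  }
  return error;
}

console.log(crossEntropy([1, 0], [0.9, 0.1])); // small error: good prediction
console.log(crossEntropy([1, 0], [0.1, 0.9])); // large error: bad prediction
```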

It is only used to compute the error during training

Do you have an example of the network? Here I tried a simple Perceptron + XOR: the first time you run it, it trains the network, stores it in localStorage,...
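The save/restore pattern looks roughly like this (a plain object stands in for `localStorage` here so the snippet runs outside the browser; with synaptic you would persist `network.toJSON()` and rebuild via `Network.fromJSON()`):

```javascript
// Stand-in for window.localStorage so the sketch is self-contained.
var storage = {};

function saveNetwork(key, networkJSON) {
  storage[key] = JSON.stringify(networkJSON);
}

function loadNetwork(key) {
  return storage[key] ? JSON.parse(storage[key]) : null;
}

// First run: train, then persist. Later runs: restore and skip training.
// (Placeholder object standing in for network.toJSON() output.)
var trained = { neurons: [], connections: [] };
if (!loadNetwork('xor-net')) {
  saveNetwork('xor-net', trained);
}
var restored = loadNetwork('xor-net');
console.log(restored !== null); // true
```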

Hi, I did an example of this a couple of years ago. It's not live anymore, but the repo [is out there](https://github.com/cazala/synaptic-wikipedia). It needs PHP to run, sadly. Basically what that...

Yes, the latter activation will be used as the gain

You could try using an LSTM instead of a Perceptron; they work better for sequence prediction since they have memory cells where they store information from previous activations.

like `new Architect.LSTM(6, 5, 1)`. You could play around a bit with the number of memory blocks (the middle argument, `5`), but from my own experience, I would stick to...
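For an `LSTM(6, 5, 1)` the training data would be shaped as six past values in, one next value out. A sliding-window helper like this (hypothetical name, not part of synaptic) builds that shape from a numeric sequence:

```javascript
// Hypothetical helper: turns a numeric sequence into { input, output }
// entries sized for a network with `windowSize` inputs and 1 output.
function toTrainingSet(sequence, windowSize) {
  var set = [];
  for (var i = 0; i + windowSize < sequence.length; i++) {
    set.push({
      input: sequence.slice(i, i + windowSize),  // six past values
      output: [sequence[i + windowSize]]          // the next value
    });
  }
  return set;
}

var series = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8];
var set = toTrainingSet(series, 6);
console.log(set.length);          // 2
console.log(set[0].input.length); // 6
```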

Nice! Feel free to add that example to the Normalization 101 article in the wiki (: Is that piece yours? You should publish it as a package (: