Understanding LSTM training
I'm sorry to ask general questions about neural networks here, but I've been reading for ages, and I can't seem to figure this out.
Say I have some data representing the number of apples on my apple tree, measured every year: [42, 12, 20, 53, 18]. If I want to predict next year's apple count, how do I implement this with Synaptic?
I read the source for the Wikipedia example (thanks for providing it), but there's too much going on for me to comprehend the basics.
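For what it's worth, here is roughly what that could look like with Synaptic's `Architect.LSTM` and the low-level `activate()`/`propagate()` training loop. This is only a minimal sketch: the normalization constant (100), the memory block count (6), the learning rate, and the iteration count are all assumptions for illustration, not recommendations.

```javascript
var synaptic = require('synaptic');
var Architect = synaptic.Architect;

// yearly apple counts, scaled into [0, 1] (100 is just an assumed upper bound)
var counts = [42, 12, 20, 53, 18];
var data = counts.map(function (c) { return c / 100; });

// 1 input, 6 memory blocks, 1 output -- sizes picked arbitrarily for the sketch
var lstm = new Architect.LSTM(1, 6, 1);
var rate = 0.05;

// feed the sequence one year at a time and train the network to output the next year
for (var i = 0; i < 2000; i++) {
  for (var t = 0; t < data.length - 1; t++) {
    lstm.activate([data[t]]);            // current year's count
    lstm.propagate(rate, [data[t + 1]]); // target: next year's count
  }
}

// prediction for year 6: run the whole sequence through and read the final output
var output;
for (var t = 0; t < data.length; t++) {
  output = lstm.activate([data[t]]);
}
console.log('predicted count for year 6:', output[0] * 100);
```

With only five data points the network is unlikely to learn anything meaningful, but this shows the basic training loop.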
I don't know whether LSTM training is applicable to this sort of thing. Maybe it is. The first thing that comes to mind for this problem would be to calculate a linear regression.
Given the years 1-5 along an X axis and the apple yield on the Y axis, you can calculate the apple yield using this formula: Y = -0.7X + 31.1. So for the 6th year, you'd expect a yield of 26.9 apples.
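Here is a small sketch of that least-squares fit, just to show where the numbers come from (the variable names are illustrative):

```javascript
// ordinary least-squares fit of Y = slope * X + intercept over years 1..5
var years = [1, 2, 3, 4, 5];
var apples = [42, 12, 20, 53, 18];
var n = years.length;

var meanX = years.reduce(function (s, x) { return s + x; }, 0) / n;  // 3
var meanY = apples.reduce(function (s, y) { return s + y; }, 0) / n; // 29

var sxy = 0, sxx = 0;
for (var i = 0; i < n; i++) {
  sxy += (years[i] - meanX) * (apples[i] - meanY);
  sxx += (years[i] - meanX) * (years[i] - meanX);
}

var slope = sxy / sxx;                  // ≈ -0.7
var intercept = meanY - slope * meanX;  // ≈ 31.1

console.log('Y = ' + slope + 'X + ' + intercept);
console.log('year 6 prediction:', slope * 6 + intercept); // ≈ 26.9
```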
In actuality I'm not trying to predict apple counts, but rather trying to find a way to predict a very large, seemingly random data set (I hope it's not random). I just wanted to see a simple example of training an LSTM.
I also need this. I feel like the support and documentation for LSTMs is severely lacking.
You might want to take a look at Neataptic; it has some more LSTM examples. It also offers a clear() training option, which basically tells the network not to carry over state from the previous training iteration. A rough sketch follows.
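For comparison with the Synaptic sketch above, the same apple-count example might look roughly like this with Neataptic's high-level `train()` API. The option values and the normalization constant are assumptions from memory, so check the Neataptic docs before relying on them:

```javascript
var neataptic = require('neataptic');

// normalize the counts into [0, 1] (100 is an assumed upper bound)
var counts = [42, 12, 20, 53, 18].map(function (c) { return c / 100; });

// training set: each year's count as input, the next year's count as expected output
var trainingSet = [];
for (var t = 0; t < counts.length - 1; t++) {
  trainingSet.push({ input: [counts[t]], output: [counts[t + 1]] });
}

var network = new neataptic.architect.LSTM(1, 6, 1);

network.train(trainingSet, {
  iterations: 2000,
  rate: 0.05,
  clear: true // reset internal state so context from earlier iterations isn't carried over
});

// run the full sequence through and read the last output as the year-6 prediction
var prediction;
counts.forEach(function (c) { prediction = network.activate([c]); });
console.log('predicted count for year 6:', prediction[0] * 100);
```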