encog-java-core
Using NN for classification data from Accelerometer
Hello, I want to recognize gestures using an accelerometer. After a couple of days I've reached the point where I got my first results, but unfortunately they are rather poor. I stuck to the rules for building the network that were described in an article about using NNs for recognition (paper title: "High Accuracy Human Activity Monitoring using Neural Network").
int networkInput = xAxis.size(); // about 100
network = new BasicNetwork();
//network.addLayer(new BasicLayer(null, true, networkInput));
network.addLayer(new BasicLayer(new ActivationTANH(), true, 3));
network.addLayer(new BasicLayer(new ActivationTANH(), true, 7));
network.addLayer(new BasicLayer(new ActivationLinear(), true, 3));
network.getStructure().finalizeStructure();
network.reset();
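For intuition, here is a hypothetical, self-contained forward pass in plain Java (not Encog's internals) showing why the first layer's neuron count has to match the length of the input vector. In the snippet above, `networkInput` is about 100 but the first layer actually added has only 3 neurons, so the network as built expects 3 inputs; the sizes and zero-initialized weights below are illustrative assumptions only.

```java
// Minimal fully connected tanh layer, used only to illustrate the
// dimension constraint: each neuron needs one weight per input value,
// so weights[j].length must equal input.length.
public class LayerSizeDemo {
    static double[] forward(double[] input, double[][] weights, double[] bias) {
        double[] out = new double[weights.length];
        for (int j = 0; j < weights.length; j++) {
            double sum = bias[j];
            for (int i = 0; i < input.length; i++) {
                sum += weights[j][i] * input[i]; // fails if sizes disagree
            }
            out[j] = Math.tanh(sum);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] input = new double[100];        // ~100 accelerometer samples
        double[][] weights = new double[7][100]; // 7 neurons, each sees all 100 inputs
        double[] bias = new double[7];
        double[] hidden = forward(input, weights, bias);
        System.out.println(hidden.length); // 7
    }
}
```

If the first `addLayer` call defines the input layer, a network whose first layer has 3 neurons cannot accept a 100-element sample, which may already explain part of the poor results.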
and training...
final Propagation train = new ResilientPropagation(network, trainingSet);
int epochsCount = 100;
for (int epoch = 1; epoch <= epochsCount; epoch++) {
    train.iteration();
}
train.finishTraining();
How to use MLDataSet is still one big mystery to me, but let's go further. I've tried two ways of creating the data: one is data[][] = {double[] xAxis, double[] yAxis, double[] zAxis}, and the second is data[][] = {double[] euclideanOfThreeAxes}; the second seems to work better. Then I cannot use supervised training because I have only one sample, so I put null as the ideal MLDataSet (of course trainingSet is built before I train the network ^^).
dataSet = new double[baseSize][];
int i = 0;
Iterator<Float> xIter = baseX.iterator();
Iterator<Float> yIter = baseY.iterator();
Iterator<Float> zIter = baseZ.iterator();
while (xIter.hasNext()) {
    dataSet[i] = new double[]{ Math.sqrt(Math.pow(xIter.next(), 2) +
            Math.pow(yIter.next(), 2) + Math.pow(zIter.next(), 2)) };
    //dataSet[i] = new double[]{xIter.next(), yIter.next(), zIter.next()};
    i++;
}
NeuralDataSet trainingSet = new BasicNeuralDataSet(dataSet,null);
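The Euclidean magnitude feature built in the loop above can be sketched with plain arrays instead of iterators; the class and method names here are illustrative, not part of Encog.

```java
// Computes sqrt(x^2 + y^2 + z^2) for each sample, the same feature the
// iterator loop in the post builds one element at a time.
public class MagnitudeFeature {
    static double[] magnitudes(float[] x, float[] y, float[] z) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = Math.sqrt((double) x[i] * x[i]
                             + (double) y[i] * y[i]
                             + (double) z[i] * z[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] m = magnitudes(new float[]{3f}, new float[]{4f}, new float[]{0f});
        System.out.println(m[0]); // 5.0
    }
}
```

A side benefit of the magnitude feature is that it is invariant to device orientation, which may be why it works better than feeding the three raw axes.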
But for the validation set I have to pass an ideal data set as an argument, otherwise the compiler throws an error, so I pass the same dataSet twice.
MLDataSet validationSet = new BasicMLDataSet(input,input);
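For supervised classification, the usual approach is to give each training sample an ideal output vector rather than reusing the input as its own ideal. One common encoding, sketched here as an assumption (this is plain Java illustrating what an `ideal` array for a data set constructor could contain, not Encog API), is one-hot: gesture k out of numClasses maps to a vector with 1.0 at index k.

```java
// Hypothetical one-hot label encoding for gesture classes.
public class OneHot {
    static double[] encode(int classIndex, int numClasses) {
        double[] ideal = new double[numClasses]; // all zeros by default
        ideal[classIndex] = 1.0;                 // mark the target class
        return ideal;
    }

    public static void main(String[] args) {
        double[] v = encode(1, 3);
        System.out.println(java.util.Arrays.toString(v)); // [0.0, 1.0, 0.0]
    }
}
```

With ideal vectors like this, the output layer size would equal the number of gesture classes, and training a single shared network on all gestures replaces the one-network-per-gesture comparison scheme.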
The 'trainingSet' represents the new gesture that I try to recognize among the others. I keep 'trainingSet' and 'network' in memory as constants that don't change over the recognition process. A 'validationSet' is created for each saved gesture and compared against the 'trainingSet'. I use the 'calculateError' function to measure the similarity between trainingSet and validationSet, but the results are very bad: the algorithm makes huge mistakes, and its correctness is under 25%.
double error = network.calculateError(validationSet) ;
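A network-level error of this kind is essentially a mean squared difference between the network's outputs and the ideal values in the data set. The sketch below shows that quantity in plain Java to make clear what a low or high value means; the exact averaging convention Encog uses may differ, so treat this as an illustration rather than Encog's formula.

```java
// Mean squared error between an output vector and an ideal vector.
public class MseSketch {
    static double mse(double[] actual, double[] ideal) {
        double sum = 0.0;
        for (int i = 0; i < actual.length; i++) {
            double d = actual[i] - ideal[i];
            sum += d * d; // squared per-element difference
        }
        return sum / actual.length; // average over elements
    }

    public static void main(String[] args) {
        System.out.println(mse(new double[]{1, 0}, new double[]{0, 0})); // 0.5
    }
}
```

Note that such an error measures how well the network reproduces a data set's ideal values; it is not by itself a similarity measure between two gestures, which may be why using it to compare trainingSet against each validationSet gives poor rankings.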
The acceleration vectors all have the same length, i.e. len(AxisX) = len(AxisY) = len(AxisZ), and also len(savedGestures) = len(gestureToRecognize).
Can you help me improve my code? I would be very thankful. I think the mistake is somewhere in how I put my vectors into BasicMLDataSet, or maybe in the way of training.
The full code can be found here: https://github.com/GitHubMurt/GestureRecognition/blob/newBranch/app/src/main/java/RecognitionTools/AlgNN.java