
Anyone able to get this to work?

inspire22 opened this issue 9 years ago • 7 comments

I'm having the same problem as this guy: http://stackoverflow.com/questions/9746062/training-neural-network-in-ruby

Training doesn't do very well despite pretty predictive data:

Max epochs 1000000. Desired error: 0.0010000000.
Epochs 1. Current error: 0.7620287538. Bit fail 118.
Epochs 100000. Current error: 0.4796932936. Bit fail 195.
Epochs 200000. Current error: 0.4797259271. Bit fail 195.
Epochs 300000. Current error: 0.4799208045. Bit fail 195.
Epochs 400000. Current error: 0.4797225893. Bit fail 195.
Epochs 500000. Current error: 0.4799278677. Bit fail 195.
Epochs 600000. Current error: 0.4797178209. Bit fail 195.
Epochs 700000. Current error: 0.4796698689. Bit fail 195.
Epochs 800000. Current error: 0.4795957506. Bit fail 194.
Epochs 900000. Current error: 0.4795432389. Bit fail 194.
Epochs 1000000. Current error: 0.4797499180. Bit fail 195.

Everything is normalized, obviously; otherwise we get segmentation faults.
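As an aside on the normalization step: here is a minimal min-max scaling sketch in plain Ruby (the `min_max_normalize` helper name is made up for illustration and is not part of ruby-fann):

```ruby
# Min-max normalization: scale each feature column into [0, 1].
# Plain Ruby, no ruby-fann dependency; the helper name is illustrative.
def min_max_normalize(rows)
  cols = rows.first.size
  mins = (0...cols).map { |c| rows.map { |r| r[c] }.min }
  maxs = (0...cols).map { |c| rows.map { |r| r[c] }.max }
  rows.map do |row|
    row.each_with_index.map do |value, c|
      range = maxs[c] - mins[c]
      # Constant columns have zero range; map them to 0.0 to avoid dividing by zero.
      range.zero? ? 0.0 : (value - mins[c]).fdiv(range)
    end
  end
end

p min_max_normalize([[1, 50], [2, 100], [3, 150]])
# => [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
```

Scaling inputs into a bounded range like [0, 1] is the usual preprocessing for sigmoid-activated networks; per the comment above, unnormalized data even triggered segmentation faults here.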

For each training run, the output is a seemingly random number that stays the same no matter the input:

2.0.0-p195 :016 > $fann.run([1])
 => [0.02257500380258881]
2.0.0-p195 :017 > $fann.run([1,5,3,2,1,4])
 => [0.02257500380258881]
2.0.0-p195 :018 > $fann.run([0.1, 0.5, 0.5, 0.5, 0.1, 0.1])
 => [0.02257500380258881]

If I re-run training, the resulting number is different.

inspire22 avatar Jun 28 '15 07:06 inspire22

Just to reply to myself: I debugged with the tic-tac-toe example project (clever!), but it is broken as well - fitness always returns 0.0

  • this project is dead!

[{:move=>[0, 0, 0, 0, 0, 0, 0, 0, 1], :fitness=>0.0},
 {:move=>[0, 0, 0, 0, 0, 0, 0, 1, 0], :fitness=>0.0},
 {:move=>[0, 0, 0, 0, 0, 0, 1, 0, 0], :fitness=>0.0},
 {:move=>[0, 0, 0, 0, 0, 1, 0, 0, 0], :fitness=>0.0},
 {:move=>[0, 0, 0, 1, 0, 0, 0, 0, 0], :fitness=>0.0},
 {:move=>[0, 0, 1, 0, 0, 0, 0, 0, 0], :fitness=>0.0},
 {:move=>[0, 1, 0, 0, 0, 0, 0, 0, 0], :fitness=>0.0},
 {:move=>[1, 0, 0, 0, 0, 0, 0, 0, 0], :fitness=>0.0}]
[0, 0, 0, 0, 0, 0, 0, 0, 1]

inspire22 avatar Jun 29 '15 06:06 inspire22

I just ran into the same problem as well while experimenting with the library. Too bad it seems to be unmaintained at the moment :/

groe avatar Oct 28 '15 23:10 groe

Yeah, this project is dead; I can't get it to run an XOR example.

require 'ruby-fann'
train = RubyFann::TrainData.new(
  inputs: [
    [0, 0], [0, 1], [1, 1], [1, 0]
  ],
  desired_outputs: [
    [0], [1], [1], [0]
  ]
)

fann = RubyFann::Standard.new(num_inputs: 2, hidden_neurons: [10, 10, 10], num_outputs: 1)
fann.train_on_data(train, 1000, 10, 0.01) # 1000 max_epochs, report every 10 epochs, 0.01 desired MSE (mean squared error)

puts '0,0', fann.run([0, 0])
puts '0,1', fann.run([0, 1])
puts '1,0', fann.run([1, 0])
puts '1,1', fann.run([1, 1])

bnolan avatar May 16 '16 10:05 bnolan

It works for me, using @bnolan 's example above:

⚠️ Disclaimer ⚠️

I'm new to NNs and don't know much NN theory. In the following example the input data set is very small, and you should not evaluate the NN against its own training data. Training can also fail, so you may need to repeat the process.

require 'ruby-fann'

inputs          = [ [0, 0], [0, 1], [1, 0], [1, 1] ]
desired_outputs = [    [0],    [1],    [1],    [0] ] # proper XOR

hidden_neurons_number = ((inputs.first.size + desired_outputs.first.size) ** 0.5).round + 1

train = RubyFann::TrainData.new(inputs: inputs, desired_outputs: desired_outputs)

fann = RubyFann::Standard.new(num_inputs: 2, hidden_neurons: [hidden_neurons_number], num_outputs: 1)
fann.train_on_data(train, 1000, 10, 0.01) # 1000 max_epochs, report every 10 epochs, 0.01 desired MSE (mean squared error)

inputs.each do |input|
  puts "#{input} => #{fann.run(input)}"
end
$ ruby nn_xor_test.rb
Max epochs     1000. Desired error: 0.0099999998.
Epochs            1. Current error: 0.2503120899. Bit fail 4.
Epochs           10. Current error: 0.2499401718. Bit fail 4.
Epochs           20. Current error: 0.2423011661. Bit fail 4.
Epochs           30. Current error: 0.1863817573. Bit fail 3.
Epochs           40. Current error: 0.1407767683. Bit fail 1.
Epochs           50. Current error: 0.0858674496. Bit fail 1.
Epochs           60. Current error: 0.0190158486. Bit fail 0.
Epochs           62. Current error: 0.0088274833. Bit fail 0.
[0, 0] => [0.03469912811834021]
[0, 1] => [0.9446454636507801]
[1, 0] => [0.8884668126782995]
[1, 1] => [0.09336504889216161]

$ ruby nn_xor_test.rb
Max epochs     1000. Desired error: 0.0099999998.
Epochs            1. Current error: 0.2500002086. Bit fail 4.
Epochs           10. Current error: 0.2500554621. Bit fail 4.
Epochs           20. Current error: 0.2129755616. Bit fail 3.
Epochs           30. Current error: 0.1599631906. Bit fail 3.
Epochs           40. Current error: 0.1051470712. Bit fail 1.
Epochs           50. Current error: 0.0148407854. Bit fail 0.
Epochs           52. Current error: 0.0080423718. Bit fail 0.
[0, 0] => [0.103887281802755]
[0, 1] => [0.9576252736384885]
[1, 0] => [0.9506385048123076]
[1, 1] => [0.11616178258391585]

tagliala avatar Dec 17 '16 13:12 tagliala
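For concreteness, the rule-of-thumb formula in the snippet above, applied to this XOR setup (2 inputs, 1 output), works out to 3 hidden neurons; plain Ruby arithmetic, no ruby-fann required:

```ruby
# Rule of thumb from the snippet above:
# hidden neurons ~ round(sqrt(num_inputs + num_outputs)) + 1
num_inputs  = 2  # XOR has two inputs
num_outputs = 1  # and one output

hidden_neurons_number = ((num_inputs + num_outputs) ** 0.5).round + 1
puts hidden_neurons_number  # prints 3 (sqrt(3) ~ 1.73, rounds to 2, plus 1)
```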

@tagliala how did you come to the formula for hidden_neurons_number?

Also, why only a single number? In the given example it is an array: [2, 8, 4, 3, 4]. I'm not sure how that relates to your formula.

heri avatar May 17 '17 16:05 heri

@heri Sorry but as I mentioned before, I don't know enough of NN theory.

I've found that formula here

Also why only a number?

As far as I understand, a single number means one hidden layer (image).

Multiple numbers mean multiple hidden layers, something like this (image).

tagliala avatar May 17 '17 16:05 tagliala

@tagliala Thanks. It looks like the + 1 can be increased up to + 10, and one also needs to experiment with the number of hidden layers.

I get large errors (more than 1_000_000!), so I will experiment with different values to see whether ruby-fann can work with my dataset.

Also, what's not mentioned by others is that we need to carefully select the data features.

heri avatar May 17 '17 22:05 heri

On the latest release (2.0), this returns acceptable values:

[0, 0] => [0.04931549037519472]
[0, 1] => [0.9527720517354428]
[1, 0] => [0.9503237704923142]
[1, 1] => [0.038072909787401965]

The XOR network test in RubyFannFunctionalTest (test_training_xor) also returns acceptable values, although it uses a sigmoid_symmetric activation function, hence the target values of -1.0 and +1.0.
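To illustrate the distinction in plain Ruby (not the ruby-fann API): FANN's standard sigmoid squashes outputs into (0, 1), while sigmoid_symmetric is tanh-like and squashes into (-1, 1), which is why that test targets -1.0 and +1.0:

```ruby
# Standard sigmoid: outputs fall in the open interval (0, 1).
def sigmoid(x)
  1.0 / (1.0 + Math.exp(-x))
end

# Symmetric sigmoid (tanh-like): outputs fall in (-1, 1).
def sigmoid_symmetric(x)
  Math.tanh(x)
end

p sigmoid(-5).round(4)            # close to the lower bound 0
p sigmoid(5).round(4)             # close to the upper bound 1
p sigmoid_symmetric(-5).round(4)  # close to the lower bound -1
p sigmoid_symmetric(5).round(4)   # close to the upper bound 1
```

So a network built with the symmetric activation naturally trains toward -1.0/+1.0 targets, where the plain-sigmoid examples above use 0/1.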

git-steven avatar Mar 21 '24 21:03 git-steven