
Are the weights optimized by the autograd of torch?

Open kevin031060 opened this issue 4 years ago • 5 comments

Hi, I understand that the topology is optimized through EA. How are the weights optimized: by the EA as well, or by backpropagation in torch?

kevin031060 avatar Aug 27 '19 01:08 kevin031060

The weights are optimized in an evolutionary fashion as well.

Note the following in config files:

# Float between 0.0 and 1.0 - rate at which a connection gene will be mutated
CONNECTION_MUTATION_RATE = 0.80
# Float between 0.0 and 1.0 - rate at which a connection's weight is perturbed (if the connection is to be mutated)
CONNECTION_PERTURBATION_RATE = 0.90

ddehueck avatar Aug 27 '19 13:08 ddehueck
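For context, here is a minimal sketch of how two such rates are typically applied in NEAT-style weight mutation. This is illustrative only, not this repo's code: `ConnectionGene`, the perturbation range, and the reset range are placeholder choices.

```python
import random

# Illustrative values matching the config entries quoted above.
CONNECTION_MUTATION_RATE = 0.80
CONNECTION_PERTURBATION_RATE = 0.90


class ConnectionGene:
    """Placeholder for a single connection in a genome."""
    def __init__(self, weight):
        self.weight = weight


def mutate_weight(gene):
    # With probability CONNECTION_MUTATION_RATE the gene is mutated at all.
    if random.random() < CONNECTION_MUTATION_RATE:
        if random.random() < CONNECTION_PERTURBATION_RATE:
            # Usually: nudge the existing weight by a small random amount.
            gene.weight += random.uniform(-0.5, 0.5)
        else:
            # Occasionally: discard the weight and re-sample it.
            gene.weight = random.uniform(-1.0, 1.0)
```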

It should be possible to also update the weights with some training procedure before fitness evaluation. It could be interesting to see if something like the Baldwin Effect is evident in the resulting fitness distribution.

ddehueck avatar Aug 27 '19 18:08 ddehueck
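A rough sketch of what that could look like, assuming hypothetical helpers `build_network` (genome → torch module), `task_loss`, and `measure_fitness`. The trained weights are deliberately not written back into the genome, which is what would make the setup Baldwinian rather than Lamarckian.

```python
import copy

import torch


def baldwinian_fitness(genome, build_network, task_loss, measure_fitness,
                       steps=10, lr=1e-2):
    """Briefly train a phenotype before measuring fitness; keep the genome unchanged."""
    net = build_network(copy.deepcopy(genome))   # phenotype with torch parameters
    optimizer = torch.optim.SGD(net.parameters(), lr=lr)

    # Short "lifetime learning" phase via backprop.
    for _ in range(steps):
        optimizer.zero_grad()
        loss = task_loss(net)
        loss.backward()
        optimizer.step()

    # Selection sees the improved fitness, but the learned weights are discarded.
    return measure_fitness(net)
```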

Thank you, this is really cool work. Is there any possibility of using backpropagation to train the generated network after using the EA to determine its structure? In other words, can we use ordinary backpropagation to train an arbitrarily connected network? I think it could be more efficient if we used backpropagation.

kevin031060 avatar Aug 28 '19 02:08 kevin031060

Yes, that would absolutely be possible. I'd be curious to see your results if you do so - perhaps we could add such an experiment to this repo.

ddehueck avatar Aug 28 '19 02:08 ddehueck
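For reference, one way to make this work, sketched under the assumption that the evolved graph is a feed-forward DAG described by (src, dst) connection pairs plus a topological ordering of its non-input nodes; the class and argument names here are made up for illustration. Each connection weight becomes an `nn.Parameter`, the forward pass evaluates nodes in topological order, and any standard optimizer plus `loss.backward()` then trains the weights while the topology stays fixed.

```python
import torch
import torch.nn as nn


class EvolvedNet(nn.Module):
    """Arbitrary feed-forward DAG with one learnable weight per connection."""

    def __init__(self, connections, node_order, input_ids, output_ids):
        super().__init__()
        self.connections = connections    # list of (src, dst) node ids
        self.node_order = node_order      # topological order of hidden/output nodes
        self.input_ids = input_ids
        self.output_ids = output_ids
        # Register one scalar parameter per evolved connection so autograd sees them.
        self.weights = nn.ParameterDict({
            f"{src}_{dst}": nn.Parameter(torch.randn(1))
            for src, dst in connections
        })

    def forward(self, x):
        # x: (batch, num_inputs); node activations are computed in topological order.
        values = {nid: x[:, i] for i, nid in enumerate(self.input_ids)}
        for node in self.node_order:
            incoming = [values[src] * self.weights[f"{src}_{dst}"]
                        for src, dst in self.connections
                        if dst == node and src in values]
            total = sum(incoming) if incoming else torch.zeros(x.size(0))
            values[node] = torch.sigmoid(total)
        return torch.stack([values[nid] for nid in self.output_ids], dim=1)


# Once the structure is fixed by evolution, ordinary backprop training applies:
net = EvolvedNet(connections=[(0, 2), (1, 2)], node_order=[2],
                 input_ids=[0, 1], output_ids=[2])
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
x = torch.rand(8, 2)
target = torch.rand(8, 1)
loss = torch.nn.functional.mse_loss(net(x), target)
loss.backward()
optimizer.step()
```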

I'll try it and hope to make some progress.

kevin031060 avatar Aug 28 '19 02:08 kevin031060