Multi-Agent-Deep-Deterministic-Policy-Gradients
About the network parameters of the target critic and the critic network:
self.target_critic.load_state_dict(critic_state_dict)
The code above seems to make the target critic network's parameters always identical to the critic network's. So what is the purpose? Making the network learn more slowly?
I hope somebody can help me!
It seems that the parameters of the target_critic network should instead be soft-updated as target_critic = tau*critic + (1-tau)*target_critic, while the actual critic network is updated by gradient descent on the loss function. I have been working with this code recently, so if I am wrong, please correct me and feel free to contact me.
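The soft-update rule described above can be sketched in plain Python. This is an illustrative example, not the repo's actual code; the parameter dictionaries and values here are hypothetical stand-ins for a network's state_dict:

```python
# Soft update: target <- tau*source + (1 - tau)*target.
# With a small tau, the target network tracks the learned
# network slowly, which stabilizes the TD targets.
def soft_update(target_params, source_params, tau):
    """Blend source parameters into target parameters in place."""
    for name in target_params:
        target_params[name] = (
            tau * source_params[name] + (1.0 - tau) * target_params[name]
        )
    return target_params

# Hypothetical single-weight "networks" for demonstration.
critic = {"w": 1.0}
target_critic = {"w": 0.0}
soft_update(target_critic, critic, tau=0.01)
# target_critic["w"] moves only 1% of the way toward critic["w"].
```

Note that calling load_state_dict with tau=1.0 behavior (a hard copy) is typically only done once at initialization, so that both networks start identical; after that, each update step should use the soft rule with a small tau.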