
[TensorFlow2] Critic Loss Calculation for actor_critic


If I understand correctly, the code in tensorflow2/actor_critic.py implements the One-step Actor-Critic (episodic) algorithm given on page 332 of *Reinforcement Learning: An Introduction* (2nd edition) by Sutton and Barto (pseudocode reproduced below).

[Image: One-step Actor-Critic (episodic) pseudocode, Sutton & Barto, p. 332]
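For reference, the relevant updates from that pseudocode, in my transcription of the book's notation (v̂ is the critic, π the actor):

```latex
\delta \leftarrow R + \gamma \, \hat{v}(S', \mathbf{w}) - \hat{v}(S, \mathbf{w})
    \quad (\hat{v}(S', \mathbf{w}) \doteq 0 \text{ if } S' \text{ is terminal})

\mathbf{w} \leftarrow \mathbf{w} + \alpha^{\mathbf{w}} \, \delta \, \nabla \hat{v}(S, \mathbf{w})

\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} + \alpha^{\boldsymbol{\theta}} \, I \, \delta \, \nabla \ln \pi(A \mid S, \boldsymbol{\theta})
```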

Here we can see that the critic parameters w are updated using only the gradient of the value function for the current state S, written as grad(V(S, w)) in the pseudocode above. The update deliberately skips the gradient of the value function for the next state S': there is no grad(V(S', w)) term in the update rule for the critic parameters w. This is exactly the "semi-gradient" property of TD methods, where the bootstrapped target is treated as a constant.

In the code linked below, state_value_, _ = self.actor_critic(state_) (L43) is evaluated inside the GradientTape. As a result, grad(V(S', w)) appears in the update for w, which contradicts the pseudocode shown above.

https://github.com/philtabor/Youtube-Code-Repository/blob/1ef76059bf55f7df9ccc09fce0e0bfb7c13e89bd/ReinforcementLearning/PolicyGradient/actor_critic/tensorflow2/actor_critic.py#L40-L45
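A minimal sketch of how the critic loss could be computed without back-propagating through V(S', w). The names here (model, gamma, the done flag) are illustrative and mirror the structure of the linked learn() method, not its exact code; model is assumed to map a state batch to (value, action_probs):

```python
import tensorflow as tf

def critic_loss_and_grads(model, state, reward, state_, done, gamma=0.99):
    with tf.GradientTape() as tape:
        state_value, _ = model(state)    # V(S, w): gradients should flow here
        state_value_, _ = model(state_)  # V(S', w): computed inside the tape...

        # ...but wrapped in stop_gradient so grad(V(S', w)) never enters
        # the update for w, matching the semi-gradient TD(0) target.
        target = reward + gamma * tf.stop_gradient(state_value_) * (1.0 - done)
        delta = target - state_value     # TD error
        loss = delta ** 2                # critic loss: squared TD error

    # Only grad(V(S, w)) appears here, thanks to stop_gradient above.
    grads = tape.gradient(loss, model.trainable_variables)
    return loss, grads
```

Equivalently, moving the state_value_, _ = model(state_) call outside the with tf.GradientTape() block removes it from the tape entirely, which is the change this issue is suggesting.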

Please let me know if there are any gaps in my understanding!

srihari-humbarwadi · Jan 25, 2022