reinforcement-learning-an-introduction

Redundant discount factor

Open · c-lyu opened this issue 2 years ago · 1 comment

Issue Description: The reproduction code for the Gridworld environment, located here, appears to have an inconsistency in how the discount factor is applied during policy evaluation. In the iterative policy evaluation pseudocode from Sutton's book (reproduced below), the discount γ appears only inside the Bellman backup [r + γ V(s')]; the book never multiplies the value by a discount factor at this point in the loop.

Expected Behavior:

Input π
Initialize an array V(s) = 0, for all s ∈ S^+ 
Repeat
    ∆ ← 0
    For each s ∈ S:
        v ← V(s)
        V(s) ← ∑_a π(a | s) ∑_{s', r} p(s', r | s, a) [r + γ V(s')]
        ∆ ← max(∆, |v − V(s)|)
until ∆ < θ
Output V ≈ v_π
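
For reference, a minimal Python sketch of this pseudocode. The environment interface here (states, actions, transitions, pi, GAMMA, THETA) is hypothetical and only meant to mirror the book's algorithm, not the repository's actual API:

```python
GAMMA = 0.9   # discount factor gamma (hypothetical value)
THETA = 1e-4  # convergence threshold theta

def policy_evaluation(states, actions, transitions, pi):
    """Iterative policy evaluation: returns V(s) approximating v_pi.

    transitions(s, a) yields (prob, next_state, reward) triples,
    i.e. p(s', r | s, a); pi(a, s) returns the probability pi(a | s).
    """
    V = {s: 0.0 for s in states}  # initialize V(s) = 0 for all s
    while True:
        delta = 0.0
        for s in states:
            v = V[s]
            # Bellman expectation backup: gamma multiplies only the
            # successor value V(s'), never the value being updated.
            V[s] = sum(
                pi(a, s) * sum(
                    p * (r + GAMMA * V[s2])
                    for p, s2, r in transitions(s, a)
                )
                for a in actions
            )
            delta = max(delta, abs(v - V[s]))
        if delta < THETA:
            return V  # V is now approximately v_pi
```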

c-lyu · Jul 13 '23 12:07

I agree. The original author of this code base applies an extraneous discount factor in the policy_evaluation function. The discount factor belongs in the GridWorld.bellman_expectation function, and only there.
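
Concretely, the corrected division of responsibility would look something like the sketch below. Only the names policy_evaluation and GridWorld.bellman_expectation come from this thread; the signatures and bodies are hypothetical:

```python
class GridWorld:
    def __init__(self, states, actions, transitions, discount=0.9):
        self.states = states            # iterable of states
        self.actions = actions          # iterable of actions
        self.transitions = transitions  # transitions(s, a) -> [(p, s', r)]
        self.discount = discount        # gamma lives here, and only here

    def bellman_expectation(self, s, a, V):
        # Expected return of taking action a in state s:
        # sum over (s', r) of p(s', r | s, a) * (r + gamma * V(s')).
        # This is the one place the discount should appear.
        return sum(
            p * (r + self.discount * V[s2])
            for p, s2, r in self.transitions(s, a)
        )

def policy_evaluation(world, pi, theta=1e-4):
    V = {s: 0.0 for s in world.states}
    while True:
        delta = 0.0
        for s in world.states:
            v = V[s]
            # No extra discount here: just the policy-weighted average
            # of the Bellman expectation over actions.
            V[s] = sum(
                pi(a, s) * world.bellman_expectation(s, a, V)
                for a in world.actions
            )
            delta = max(delta, abs(v - V[s]))
        if delta < theta:
            return V
```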

NathanZorndorf · Dec 04 '23 00:12