Part 3, TD(lambda): trace_matrix should be reset to zeroes at the beginning of each epoch
I believe that in part 3, TD(lambda), the trace_matrix should be reset to zeros at the beginning of each epoch. Otherwise a state's eligibility trace carries over from a previous episode, and its utility may be updated even though the state is not part of the current trace.
Also, I believe that the decay of the trace_matrix should be moved to just before the line `trace_matrix[observation[0], observation[1]] += 1`, so that the trace of the state being visited is incremented after the decay and enters the update at full strength.
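
To make the proposed ordering concrete, here is a minimal TD(lambda) loop sketch with both changes applied. It assumes a gridworld-style environment; the names `env.reset()`, `env.step()`, and the hyperparameter values are placeholders, not the book's actual API:

```python
import numpy as np

def td_lambda_sketch(env, utility_matrix, tot_epoch=3000,
                     alpha=0.1, gamma=0.999, lambda_=0.5):
    """Sketch of a TD(lambda) loop with the two proposed fixes.

    `env` is assumed to expose reset() -> (row, col) and
    step(action) -> (new_observation, reward, done).
    """
    for epoch in range(tot_epoch):
        # Fix 1: reset the eligibility traces at the start of each
        # epoch, so stale traces from a previous episode cannot
        # trigger updates to states outside the current trace.
        trace_matrix = np.zeros(utility_matrix.shape)
        observation = env.reset()
        done = False
        while not done:
            action = np.random.randint(0, 4)  # placeholder policy
            new_observation, reward, done = env.step(action)
            # TD error for this transition
            delta = (reward
                     + gamma * utility_matrix[new_observation[0],
                                              new_observation[1]]
                     - utility_matrix[observation[0], observation[1]])
            # Fix 2: decay the traces *before* incrementing the
            # current state, so the state just visited keeps a
            # full-strength trace for this update.
            trace_matrix = trace_matrix * gamma * lambda_
            trace_matrix[observation[0], observation[1]] += 1
            # TD update weighted by the eligibility traces
            utility_matrix += alpha * delta * trace_matrix
            observation = new_observation
    return utility_matrix
```

This ordering (decay, increment, update) matches the accumulating-traces formulation in Sutton and Barto: e(s) <- gamma * lambda * e(s) for all states, then e(s_t) += 1, then V(s) += alpha * delta * e(s).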