
Question: Is this some form of reward engineering?

Open WorksWellWithOthers opened this issue 3 years ago • 1 comment

The snippet below would break in any environment whose state does not unpack into exactly 4 values.

  1. If it's not essential, can we just remove it?
  2. If it is essential, would someone explain why and/or reference the paper it comes from? It seems specific to CartPole, and I wasn't sure whether the implementation's goal was to solve CartPole only.
```python
r1 = (env.x_threshold - abs(x)) / env.x_threshold - 0.8
r2 = (env.theta_threshold_radians - abs(theta)) / env.theta_threshold_radians - 0.5
reward = r1 + r2
```

WorksWellWithOthers avatar Dec 05 '20 00:12 WorksWellWithOthers

@WorksWellWithOthers This is indeed a form of reward engineering, and it is specific to CartPole: it shapes the returned state into a denser numeric reward. Other environments would not need this particular formula; they typically return a usable reward of their own.
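
To make the shaping environment-safe, one option is to guard it behind a state-length check and fall back to the environment's native reward otherwise. The sketch below is illustrative, not the repo's code: the threshold constants mirror Gym's CartPole defaults (`x_threshold = 2.4`, a 12° pole-angle limit), and `shape_reward` is a hypothetical helper name.

```python
import math

# Assumed constants mirroring Gym's CartPole-v1 defaults; not taken from the repo.
X_THRESHOLD = 2.4                          # cart position limit
THETA_THRESHOLD = 12 * math.pi / 180       # pole angle limit in radians

def shape_reward(state, env_reward):
    """Apply CartPole-style reward shaping when the state has 4 values,
    otherwise return the environment's own reward unchanged."""
    if len(state) != 4:
        # Non-CartPole environment: keep the native reward.
        return env_reward
    x, x_dot, theta, theta_dot = state
    # r1 grows as the cart nears the center; r2 grows as the pole nears upright.
    # The -0.8 / -0.5 offsets recenter the sum around zero.
    r1 = (X_THRESHOLD - abs(x)) / X_THRESHOLD - 0.8
    r2 = (THETA_THRESHOLD - abs(theta)) / THETA_THRESHOLD - 0.5
    return r1 + r2
```

With a perfectly centered, upright state `[0, 0, 0, 0]` this yields `0.2 + 0.5 = 0.7`, while a 3-value state falls through to the environment's reward.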

scprotz avatar Feb 01 '21 16:02 scprotz