
[Question] How to pass a varying gamma to DQN or PPO during training?

Open rariss opened this issue 1 year ago • 6 comments

❓ Question

Reinforcement learning algorithms and the SB3 implementations use a single constant gamma to discount future values during learning. That is fine for discrete-time environments, where every action advances time by the same amount and the future value is therefore discounted by the same factor at every step.

I have a custom gym environment that steps in discrete decision epochs, but each action takes a different amount of time. Discounting future values at a constant rate is therefore incorrect: I need to discount future values by a gamma that is a function of the time it takes to carry out the action in the environment.
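
For concreteness, the kind of discounting I have in mind looks like the sketch below (dt is a hypothetical per-action duration reported by my environment, not an existing SB3 feature):

```python
import numpy as np

# Base discount per unit of time (not per decision epoch).
gamma = 0.99

# Hypothetical durations of three consecutive actions, in the same time unit.
dts = np.array([0.5, 2.0, 1.0])

# Per-transition discount factors: gamma ** dt instead of a constant gamma.
step_gammas = gamma ** dts

# Discounted return of the 3-step trajectory with rewards r_0, r_1, r_2.
rewards = np.array([1.0, 0.0, 2.0])
discounts = np.concatenate(([1.0], np.cumprod(step_gammas[:-1])))
discounted_return = float(np.sum(discounts * rewards))
```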

Is there any way to pass gamma as a function, or as a tensor that maps to each (s, a, s’, r) tuple, during learning? Is this possible with existing features or callbacks? I’d like to avoid forking the repository if possible.

Any input would be appreciated, as I’ve been stuck on this for some time. Thanks in advance!


rariss avatar Apr 10 '24 16:04 rariss

Hello, in your case the best option is to fork SB3 and adapt the rollout buffer/PPO. This is too custom to be solved by callbacks or subclassing.
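
Roughly the direction I mean, as an untested sketch (the gammas argument and where the durations come from are up to you; they are not existing SB3 features):

```python
import numpy as np
from stable_baselines3.common.buffers import RolloutBuffer


class TimeAwareRolloutBuffer(RolloutBuffer):
    """RolloutBuffer variant that stores one discount factor per transition."""

    def reset(self) -> None:
        super().reset()
        # One gamma per (step, env); defaults to the constant gamma.
        self.gammas = np.full((self.buffer_size, self.n_envs), self.gamma, dtype=np.float32)

    def add(self, *args, gammas=None, **kwargs) -> None:
        # Record the duration-dependent discount for the slot about to be written
        # (super().add() increments self.pos afterwards).
        if gammas is not None:
            self.gammas[self.pos] = np.asarray(gammas, dtype=np.float32)
        super().add(*args, **kwargs)
```

In a fork you would also touch OnPolicyAlgorithm.collect_rollouts so that it reads the action duration from the env (for instance from the info dicts) and passes the corresponding per-env gammas to add().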

araffin avatar Apr 10 '24 17:04 araffin

Thanks for your quick response.

So if I understand correctly, you’re suggesting augmenting the rollout buffer to collect time-varying gammas with each rollout, and then using those gammas from the buffer in the PPO loss function?

rariss avatar Apr 10 '24 17:04 rariss

you’re suggesting augmenting the rollout buffer to collect time-varying gammas with each rollout, and then using those gammas from the buffer in the PPO loss function?

Correct. To make it simpler to use (and to make it work with VecEnv, for instance), that would actually be one gamma per timestep, and you would need to use that value every time gamma is used (notably in the GAE computation).
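
For the GAE part, something along these lines, continuing the buffer sketch from above (untested, adapted from RolloutBuffer.compute_returns_and_advantage with the constant self.gamma replaced by the stored per-step values):

```python
import numpy as np
import torch as th


# Method of the TimeAwareRolloutBuffer sketched earlier.
def compute_returns_and_advantage(self, last_values: th.Tensor, dones: np.ndarray) -> None:
    # Convert the value estimate of the last step to numpy.
    last_values = last_values.clone().cpu().numpy().flatten()

    last_gae_lam = 0
    for step in reversed(range(self.buffer_size)):
        if step == self.buffer_size - 1:
            next_non_terminal = 1.0 - dones.astype(np.float32)
            next_values = last_values
        else:
            next_non_terminal = 1.0 - self.episode_starts[step + 1]
            next_values = self.values[step + 1]
        # Duration-dependent discount for this transition.
        gamma = self.gammas[step]
        delta = self.rewards[step] + gamma * next_values * next_non_terminal - self.values[step]
        last_gae_lam = delta + gamma * self.gae_lambda * next_non_terminal * last_gae_lam
        self.advantages[step] = last_gae_lam
    # TD(lambda) estimator of the returns, as in the original implementation.
    self.returns = self.advantages + self.values
```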

araffin avatar Apr 10 '24 18:04 araffin

Got it. Yes, absolutely: that gamma would be the discount factor for a “step”, i.e. a discrete decision epoch.

rariss avatar Apr 10 '24 18:04 rariss

If I augment the rollout buffer, does all of its content get passed to the learn function? Meaning I don’t need to modify the inputs to the training update functions; I just need to extract the gammas from the batch of buffered step data and use them in the GAE discount?

rariss avatar Apr 10 '24 18:04 rariss

Meaning I don’t need to modify the inputs to the training update functions;

Yes, you need to modify the named tuple that represents a transition and adapt the GAE computation accordingly.
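
If you also want the gammas available in the training minibatches (they are only strictly needed for the GAE computation above, but you might want them in a custom loss), a sketch of the extended named tuple, modeled on RolloutBufferSamples in stable_baselines3.common.type_aliases:

```python
from typing import NamedTuple

import torch as th


class TimeAwareRolloutBufferSamples(NamedTuple):
    # Same fields as stable_baselines3.common.type_aliases.RolloutBufferSamples ...
    observations: th.Tensor
    actions: th.Tensor
    old_values: th.Tensor
    old_log_prob: th.Tensor
    advantages: th.Tensor
    returns: th.Tensor
    # ... plus the per-transition discount factors.
    gammas: th.Tensor
```

The buffer’s _get_samples() would then have to return this type, converting self.gammas like the other arrays (swap_and_flatten in get(), then self.to_torch in _get_samples()).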

araffin avatar May 10 '24 13:05 araffin