stable-baselines3
[Question] How to pass a varying gamma to DQN or PPO during training?
❓ Question
Reinforcement learning algorithms, including the SB3 implementations, apply the usual constant gamma when discounting future values during learning. This is fine for discrete-time environments, where the future value is discounted by the same constant at every step.
I have a custom gym environment that steps in discrete decision epochs, but where each action takes a different amount of time. Discounting future values at a constant rate is therefore incorrect. What I need is to discount future values by a gamma that is a function of the time the action takes in the environment.
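For concreteness, here is a minimal sketch of the kind of per-transition discount I mean, under a continuous-time/Semi-MDP view where gamma is treated as a rate per unit of time (the names `gamma_base` and `dt` are illustrative, not SB3 parameters):

```python
# per-unit-of-time discount rate and the duration of one particular action
gamma_base = 0.99   # discount per unit of time
dt = 2.5            # time this action took in the environment (env-specific)

# effective discount factor for this single transition
step_gamma = gamma_base ** dt
```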
Is there any way to pass gamma in as a function, or as tensors that map to each (s, a, s’, r) tuple during learning? Is this maybe possible with existing features or callbacks? I’d like to avoid forking the repository if possible.
Any input would be appreciated as I’ve been stuck on this for some time. Thanks in advance!
Checklist
- [X] I have checked that there is no similar issue in the repo
- [X] I have read the documentation
- [X] If code there is, it is minimal and working
- [X] If code there is, it is formatted using the markdown code blocks for both code and stack traces.
Hello, in your case, the best is to fork sb3 and adapt the rollout buffer/ppo. This is too custom to be solved by callbacks or subclassing.
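Just to illustrate the kind of fork this means (not SB3 API; `TimedRolloutBuffer` and `gammas`/`add_gammas` are made-up names for this sketch): the buffer would allocate one extra array for per-step discount factors, and a modified `collect_rollouts()` would fill it, e.g. with `gamma_base ** dt` read from the env:

```python
import numpy as np
from stable_baselines3.common.buffers import RolloutBuffer


class TimedRolloutBuffer(RolloutBuffer):
    """Sketch: RolloutBuffer that also stores one discount factor per timestep.

    The forked collect_rollouts() would be responsible for calling
    add_gammas() for the current step, right after the usual add() call.
    """

    def reset(self) -> None:
        super().reset()
        # one discount factor per (timestep, env), same layout as rewards/values
        self.gammas = np.ones((self.buffer_size, self.n_envs), dtype=np.float32)

    def add_gammas(self, step: int, gammas: np.ndarray) -> None:
        # assumed helper: store the per-env discount for the given step index
        self.gammas[step] = np.asarray(gammas, dtype=np.float32)
```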
Thanks for your quick response.
So if I understand, you’re suggesting to augment the replay buffer to collect time-varying gammas with each rollout, then in the PPO loss function, use the gammas from the replay buffer?
> you’re suggesting to augment the replay buffer to collect time-varying gammas with each rollout, then in the PPO loss function, use the gammas from the replay buffer?
correct, that would actually be one gamma per timestep to make it simpler to use (and to make it work with VecEnv, for instance), and you would need to use that value every time gamma is used (notably in the GAE computation)
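A standalone sketch of what that GAE change looks like (single env, 1-D NumPy arrays; it mirrors the recursion in `RolloutBuffer.compute_returns_and_advantage`, but simplified and with `gammas[t]` replacing the constant gamma, so verify against your SB3 version before porting it):

```python
import numpy as np


def gae_with_varying_gamma(rewards, values, last_value, episode_starts,
                           gammas, gae_lambda=0.95):
    """GAE where each transition carries its own discount factor gammas[t].

    All array arguments have length T; last_value is the value estimate
    for the state reached after the last collected step.
    """
    n_steps = len(rewards)
    advantages = np.zeros(n_steps, dtype=np.float32)
    last_gae = 0.0
    for t in reversed(range(n_steps)):
        if t == n_steps - 1:
            next_non_terminal = 1.0  # simplification: rollout ends by truncation
            next_value = last_value
        else:
            next_non_terminal = 1.0 - episode_starts[t + 1]
            next_value = values[t + 1]
        # TD error, discounted by this transition's own gamma
        delta = rewards[t] + gammas[t] * next_value * next_non_terminal - values[t]
        # GAE recursion, also using the per-transition gamma
        last_gae = delta + gammas[t] * gae_lambda * next_non_terminal * last_gae
        advantages[t] = last_gae
    returns = advantages + values
    return advantages, returns
```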
Got it. Yes, absolutely, that gamma would be the discount factor for a “step” or discrete decision epoch.
If I augment the replay buffer, does all of its content get passed to the learn function? Meaning I don’t need to modify the input for the training update functions; I just need to extract the gammas from the batch of step buffer data and use them in the GAE discount?
> Meaning I don’t need to modify the input for the training update functions,
you need to modify the named tuple that represents a transition and modify the GAE computation accordingly, yes
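For reference, a sketch of that extended named tuple (the first six fields mirror SB3’s `RolloutBufferSamples` in the version I checked, so double-check against yours; `gammas` and the class name are illustrative). The forked buffer’s `_get_samples()` would build this tuple so the per-step gammas reach the training loop:

```python
from typing import NamedTuple

import torch as th


class TimedRolloutBufferSamples(NamedTuple):
    """Transition batch extended with a per-step discount factor."""
    observations: th.Tensor
    actions: th.Tensor
    old_values: th.Tensor
    old_log_prob: th.Tensor
    advantages: th.Tensor
    returns: th.Tensor
    gammas: th.Tensor  # added field: one discount factor per sampled transition
```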