DWA-RL
I was wondering if it would be possible for you to share the full code with me.
How are you, @NithishkumarS? I have sent you an email; thank you for your response. Here is a list of what I am looking for:
- The DWA-RL novel reward and the static reward: in your paper you only consider dynamic obstacles with the novel reward, but you trained the agent in both static and dynamic environments. Could you share both the static reward and the DWA-RL novel reward?
- DWA-RL's PPO hyper-parameters (epochs, learning rate, etc.)
- The released code does not include 'stable_baselines.py'. Could you share it?
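To be concrete about which hyper-parameters I mean, here is a minimal sketch. The names mirror stable-baselines' PPO2 keyword arguments, and the values are just common defaults I am assuming, not your actual settings:

```python
# Hypothetical example only: key names follow stable-baselines' PPO2
# keyword arguments; the values are common defaults, NOT the settings
# used in the DWA-RL paper.
ppo_hparams = {
    "learning_rate": 2.5e-4,  # Adam step size
    "noptepochs": 4,          # optimization epochs per update
    "n_steps": 128,           # rollout length per environment
    "nminibatches": 4,        # minibatches per epoch
    "gamma": 0.99,            # discount factor
    "lam": 0.95,              # GAE lambda
    "cliprange": 0.2,         # PPO clipping parameter
    "ent_coef": 0.01,         # entropy bonus coefficient
}

print(sorted(ppo_hparams))
```

If you could fill in a table like this with the values you actually used, that would fully answer my question.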
Thank you :>.