Unable to use RL algorithms with continuous action space
Hi @praveen-palanisamy
I have been working with macad-gym successfully over the past few months using PPO and many other algorithms. Now I am trying to use DDPG with RLlib, which requires a continuous action space.
I have changed the boolean "discrete_actions": False within the environment config, but it's still an issue since the policy function is passing Discrete(9) and I do not know the alternative for a continuous action space.
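For reference, here is a minimal sketch of where I set that flag; the surrounding keys are illustrative and may differ from the exact macad-gym config layout:

```python
# Sketch only: the key layout around "discrete_actions" is an assumption,
# not the exact macad-gym default config.
env_config = {
    "env": {
        "discrete_actions": False,  # ask the env for continuous (Box) actions
        # ... other env-level settings (map, rendering, resolution, ...)
    },
    "actors": {
        "car1": {
            # ... per-actor settings
        },
    },
}
```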
I also followed the guide mentioned here, but now it's giving me the following error: error.txt
Any help in this regard would be appreciated. Thanks.
Hi @AizazSharif ,
Good to hear about your continued interest and experiments on top of macad-gym.
You did the right thing on the macad-gym side, i.e., setting "discrete_actions": False to make the environment use a continuous action space. Now, on the agent side, the policy network needs to produce continuous-valued actions of the appropriate shape.
For example, you would create a PPO/DDPG policy whose network output has the shape of Box(2) instead of Discrete(9), where Box(2) refers to two continuous-valued outputs (one for steering, another for throttle).
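To make that concrete, here is a hedged sketch of declaring such an action space and plugging it into an RLlib policy spec; the bounds and the policy class shown are assumptions, so adjust them to your environment and RLlib version:

```python
import numpy as np
from gym.spaces import Box

# Two continuous actions: [steer, throttle]. Bounds of [-1, 1] are an
# assumption; match them to what the environment actually expects.
continuous_action_space = Box(low=np.array([-1.0, -1.0]),
                              high=np.array([1.0, 1.0]),
                              dtype=np.float32)

# When building the multi-agent policy graphs, this Box replaces Discrete(9),
# e.g. (obs_space and the DDPG policy class are placeholders for your setup):
# policies = {
#     "car1": (DDPGTFPolicy, obs_space, continuous_action_space, {}),
# }
```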
From the error logs, it looks like the DDPG critic network's concat operation is failing to concatenate tensors of different rank: ValueError: Shape must be rank 4 but is rank 2 for 'car1/critic/concat' (op: 'ConcatV2') with input shapes: [?,84,84,3], [?,8]
This operation is defined in RLlib's DDPG (ddpg_policy.py), which you need to configure to generate actions of the appropriate shape and range (using the example above).
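The rank mismatch itself comes from the critic trying to concatenate the raw image observation ([?, 84, 84, 3]) with the flat action vector ([?, 8]); the image needs to go through a vision encoder (or be flattened) before that concat. A hedged sketch of the kind of DDPG config that addresses this, assuming your RLlib version exposes a state-preprocessor option (option names may differ across Ray releases):

```python
# Sketch only: option names depend on the Ray/RLlib version you are running.
ddpg_config = {
    # Encode the image observation with a conv net before the critic
    # concatenates it with the action vector, instead of concatenating
    # the raw image directly.
    "use_state_preprocessor": True,
    "model": {
        # Conv stack sized for 84x84x3 inputs (assumed; adjust to your obs).
        "conv_filters": [
            [16, [8, 8], 4],
            [32, [4, 4], 2],
            [256, [11, 11], 1],
        ],
    },
}
```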
Hope that helps.
Thanks for the reply @praveen-palanisamy. I will look into it and let you know.
I also wanted to ask whether it is possible to have one agent with discrete actions and another with continuous actions in the same driving scenario? @praveen-palanisamy As an example, one car is trained using PPO and another using DDPG.
Hi @AizazSharif , I missed your new question until now. Yes, you can use different algorithms per agent/car. The RLlib example agents in the MACAD-Agents repository are a good starting point for the multi-agent autonomous driving setting. You can refer to this sample for a generic PPO/DQN example using RLlib.
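For the different-action-space part specifically, here is a hedged sketch of how per-agent policies with different action spaces can be declared in RLlib's multiagent config; the spaces and names are illustrative, and note that a single Trainer still runs one algorithm, so mixing PPO and DDPG outright would need separate trainers sharing the environment:

```python
from gym.spaces import Box, Discrete

# Placeholder for the env's per-actor observation space.
obs_space = ...

# car1 keeps discrete actions, car2 uses continuous (Box) actions.
# `None` tells RLlib to use the trainer's default policy class.
policies = {
    "car1": (None, obs_space, Discrete(9), {}),
    "car2": (None, obs_space, Box(-1.0, 1.0, shape=(2,)), {}),
}

config = {
    "multiagent": {
        "policies": policies,
        # Map each car id to the policy of the same name.
        "policy_mapping_fn": lambda agent_id: agent_id,
    },
}
```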
Hi @praveen-palanisamy Thanks for the reply. I have looked at these examples, but the agents in each environment all share the same type of action space. I couldn't find an example implementation where both discrete- and continuous-action agents run in the same multi-agent setting.