
Multi-agent vectorized environment support for gym wrapper.

abhayraw1 opened this issue 4 years ago • 12 comments

Is your feature request related to a problem? Please describe. I recently installed ml-agents to use for a research project. My use case is a multi-agent scenario involving coordination amongst the agents. However, the gym wrapper currently only supports a single agent.

Also, having a bunch of environments running in parallel that can be handled by the gym wrapper would be awesome!

Describe the solution you'd like Currently, the low-level Python API (the UnityEnvironment class) provides access to multiple agents. Exposing this through the gym wrapper would be really helpful. One multi-agent environment implementation that I personally like is in ray-rllib here

For multiple environments, a vectorized approach could work, similar to OpenAI's VecEnv link
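For concreteness, here is a rough sketch of the kind of RLlib-style interface I have in mind, built directly on the low-level UnityEnvironment API. The class and names are hypothetical, and the exact mlagents_envs method signatures vary between releases, so treat this as an illustration rather than working code:

```python
# A minimal sketch (not an existing gym_unity class) of an RLlib-style
# multi-agent interface built directly on the low-level UnityEnvironment API.
from mlagents_envs.environment import UnityEnvironment


class MultiAgentUnityWrapper:
    """Key observations, rewards and dones by agent id, RLlib-style."""

    def __init__(self, env: UnityEnvironment, behavior_name: str):
        self.env = env
        self.behavior_name = behavior_name
        self.env.reset()

    def reset(self):
        self.env.reset()
        decision_steps, _ = self.env.get_steps(self.behavior_name)
        return {aid: decision_steps[aid].obs for aid in decision_steps.agent_id}

    def step(self, action_dict):
        # action_dict maps agent_id -> that agent's action array.
        for aid, action in action_dict.items():
            self.env.set_action_for_agent(self.behavior_name, aid, action)
        self.env.step()
        decision_steps, terminal_steps = self.env.get_steps(self.behavior_name)
        # Agents that terminated this step only appear in terminal_steps;
        # merging their final observations/rewards is discussed further down
        # in the thread.
        obs = {aid: decision_steps[aid].obs for aid in decision_steps.agent_id}
        rewards = {aid: decision_steps[aid].reward for aid in decision_steps.agent_id}
        dones = {aid: True for aid in terminal_steps.agent_id}
        return obs, rewards, dones, {}
```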

Since I will be developing workarounds for my project anyway, I would like to contribute towards this goal. For now, I am going to keep my solution as close as possible to ray-rllib's implementation. Inputs and critiques are welcome!

abhayraw1 avatar Jun 13 '20 19:06 abhayraw1

Hi, the ray-rllib link is broken

Hsgngr avatar Jun 14 '20 00:06 Hsgngr

@Hsgngr I updated the link!!

abhayraw1 avatar Jun 14 '20 12:06 abhayraw1

As of now, I have zeroed in on this snippet of code, which raises an exception whenever the number of agents in the UnityEnvironment is greater than one:

https://github.com/Unity-Technologies/ml-agents/blob/3c2fa4d8d1cd981e9cef6b2e2fdb2f77757983c3/gym-unity/gym_unity/envs/__init__.py#L267-L272
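For readers following along, the guard amounts to roughly this (a paraphrase, not the exact source; UnityGymException is the wrapper's own exception type):

```python
from gym_unity.envs import UnityGymException

# Rough paraphrase of the linked guard: the wrapper counts the agents reported
# for the behavior and refuses to continue when there is more than one.
def _check_agents(n_agents: int) -> None:
    if n_agents > 1:
        raise UnityGymException(
            f"There can only be one Agent in the environment, "
            f"but {n_agents} were detected."
        )
```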

Is there any reason why this restriction is in place? Is it just to make sure that the environments are compatible with standard RL libraries like OpenAI's baselines and Dopamine?

abhayraw1 avatar Jun 14 '20 12:06 abhayraw1

Hi @xiaomaogy, I've currently managed to get the data from multiple agents by bypassing the above-mentioned check. For stepping, I change the following line https://github.com/Unity-Technologies/ml-agents/blob/3c2fa4d8d1cd981e9cef6b2e2fdb2f77757983c3/gym-unity/gym_unity/envs/__init__.py#L169

to use (-1, action_size) instead of (1, action_size). The check for whether the number of agents matches is done in the set_actions method of the UnityEnvironment class, so I didn't enforce any checks for now. https://github.com/Unity-Technologies/ml-agents/blob/20527d10121b68c60b490468eafed0465df498e3/ml-agents-envs/mlagents_envs/environment.py#L338-L345
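Roughly, the change amounts to the following (variable names here are just for illustration):

```python
import numpy as np

action_size = 2    # e.g. a 2-dimensional continuous action space
n_agents = 4       # hypothetical number of active agents
actions = np.random.uniform(-1, 1, size=n_agents * action_size)

# Original single-agent wrapper behaviour: assumes exactly one agent.
# action = np.reshape(actions, (1, action_size))   # breaks when n_agents > 1

# Multi-agent bypass described above: let the row count follow the data,
# and rely on UnityEnvironment.set_actions() to validate it against the
# number of agents currently requesting decisions.
action = np.reshape(actions, (-1, action_size))    # shape (n_agents, action_size)
```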

The issue that I am facing now is that when the episode ends, I only get the observations from the agent that is responsible for termination.

My use case, however, is quite different. I want episodes to be agent-dependent: even if some agent "dies", the rest of the agents should continue, and the dead agent would respawn somewhere else on the map. Is this achievable? And could you give me some pointers on possible pitfalls I should look out for? Thanks in advance!!

abhayraw1 avatar Jun 15 '20 21:06 abhayraw1

This snippet is responsible for returning the observations when an agent reaches its terminal condition: https://github.com/Unity-Technologies/ml-agents/blob/3c2fa4d8d1cd981e9cef6b2e2fdb2f77757983c3/gym-unity/gym_unity/envs/__init__.py#L175-L180

In a multi-agent/vectorized setting, would it be okay to return the observations/rewards/dones considering both decision_steps and terminal_steps rather than only one?
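Something along these lines is what I have in mind; the function and variable names are just for illustration:

```python
# Merge a (decision_steps, terminal_steps) pair from UnityEnvironment.get_steps()
# into per-agent dictionaries, so agents that hit a terminal condition still
# report their final observation and reward alongside the surviving agents.
def merge_steps(decision_steps, terminal_steps):
    obs, rewards, dones = {}, {}, {}
    # Agents that just terminated: take their final observation/reward.
    for agent_id in terminal_steps.agent_id:
        step = terminal_steps[agent_id]
        obs[agent_id], rewards[agent_id], dones[agent_id] = step.obs, step.reward, True
    # Agents still alive and requesting a decision this step.
    for agent_id in decision_steps.agent_id:
        step = decision_steps[agent_id]
        obs[agent_id], rewards[agent_id], dones[agent_id] = step.obs, step.reward, False
    return obs, rewards, dones
```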

P.S.: One workaround I can think of for my particular use case is to not call the EndEpisode() method in the C# script for the agents. But then I would still need the information about whether an agent terminated or not. I don't know if that makes sense!

abhayraw1 avatar Jun 15 '20 22:06 abhayraw1

Similar to the original poster's question: is there any reason why multi-agent isn't supported? The code changes to make the gym-like API support it aren't difficult, so I'm trying to figure out whether I'm missing something or whether this is purely a conceptual difficulty.

laszukdawid avatar Nov 15 '20 18:11 laszukdawid

Since I need this for my own purposes, I've added my own wrapper (based on Unity's wrapper), which can be found here https://github.com/laszukdawid/ai-traineree/blob/master/ai_traineree/tasks.py#L101 (see the associated commit https://github.com/laszukdawid/ai-traineree/commit/39dcf3188d0b14853508c48f63416a2df7a94a7e).

I'd appreciate any reply from Unity's team. I'm planning on adding more support for multi-agent use cases and wouldn't mind contributing a bit.

laszukdawid avatar Nov 18 '20 04:11 laszukdawid

@laszukdawid can you provide a simple Colab showing how to use your wrapper? I am in the dark with the Python API and the gym wrapper's outdated documentation.

Ademord avatar Jun 05 '21 15:06 Ademord

For log continuity: I replied to Ademord on an issue they created in my deep reinforcement learning repository. I'm happy to assist where I can.

laszukdawid avatar Jun 06 '21 00:06 laszukdawid

Are there any updates on this issue? It would be great to see support for Ray's RLlib in ML-Agents, particularly for multi-agent reinforcement learning.

dynamicwebpaige avatar Apr 11 '22 22:04 dynamicwebpaige

Sorry, we are not currently supporting multi-agent vectorized environments in the gym wrapper.

xcao65 avatar Apr 12 '22 20:04 xcao65

Understood, and thanks for the update, @xcao65!

dynamicwebpaige avatar Apr 13 '22 04:04 dynamicwebpaige

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

github-actions[bot] avatar Nov 04 '22 20:11 github-actions[bot]