multiagent_mujoco
Benchmark for Continuous Multi-Agent Robotic Control, based on OpenAI's Mujoco Gym environments.
The new envs "manyagent_swimmer", "manyagent_ant", and "coupled_half_cheetah" do not work. Example: ```python $ py test.py # 'scenario': 'manyagent_swimmer' [HyperEdge({rot1, rot0}), HyperEdge({rot1, rot2}), HyperEdge({rot3, rot2}), HyperEdge({rot3, rot4}), HyperEdge({rot5, rot4}), HyperEdge({rot5, rot6}), HyperEdge({rot7,...
I was checking the environment code and noticed that an action wrapper is always used to normalize the actions; the code used for this is: ``` class NormalizedActions(gym.ActionWrapper):...
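The snippet above is truncated, but a `gym.ActionWrapper` of this kind typically rescales a normalized action in [-1, 1] to the env's true bounds inside its `action()` method. A minimal plain-Python sketch of that rescaling (the function name and form are my own reconstruction, not the repo's exact code):

```python
def denormalize_action(action, low, high):
    """Map a normalized action in [-1, 1] to the env's true range [low, high].

    Hypothetical sketch of what a NormalizedActions-style gym.ActionWrapper
    usually does per action dimension; not copied from the repository.
    """
    return low + (action + 1.0) * 0.5 * (high - low)

# -1 maps to the lower bound, +1 to the upper bound, 0 to the midpoint
print(denormalize_action(-1.0, -0.4, 0.4))  # -0.4
print(denormalize_action(1.0, -0.4, 0.4))   # 0.4
```

In the real wrapper this would be applied element-wise to the action vector using the env's `action_space.low` and `action_space.high`.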
My PROBLEM is whether multiagent_mujoco can work with a higher version of "gym", like 0.17.2 or 0.21.0 (newest). I want to create a MARL environment that includes some MARL benchmark packages. "mamujoco" needs gym...
I have trouble understanding where the list of action vectors for each agent (that you pass to the MujocoMulti env) is reassembled into the single-agent Mujoco env action...
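For reference, this reassembly usually amounts to scattering each agent's action vector into its slots of the flat single-agent action, using the per-agent action partitions. A hypothetical plain-Python sketch (the name `assemble_actions` and the index-list representation of the partitions are my own assumptions, not the repo's API):

```python
def assemble_actions(agent_actions, agent_partitions):
    """Scatter per-agent action vectors into one flat env action.

    agent_actions:    list of per-agent action lists
    agent_partitions: list of index lists giving each agent's slots in the
                      flat single-agent action vector (hypothetical
                      representation of the env's action partitions)
    """
    n = sum(len(p) for p in agent_partitions)
    flat = [0.0] * n
    for actions, indices in zip(agent_actions, agent_partitions):
        for a, i in zip(actions, indices):
            flat[i] = a
    return flat

# two agents controlling interleaved joints of a 4-dim action space
print(assemble_actions([[0.1, 0.2], [0.3, 0.4]], [[0, 2], [1, 3]]))
# [0.1, 0.3, 0.2, 0.4]
```

The flat vector is then what gets passed to the underlying single-agent Mujoco env's `step`.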
Hey, thank you for such a great addition to multi-agent cooperative environments. I am playing with the environment and noticed that the environment's action space is bounded within [-1, 1]. But...
This allows mode='rgb_array' and 'depth_array' to return the array as in the original single-agent Mujoco Gym env. These modes are faster than mode='human'.
I notice that the function `close` **raises NotImplementedError** in class `MujocoMulti` in `mujoco_multi.py`. I want to know whether it is meant to be implemented by ourselves. If so, how...
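If you do need `close`, one common pattern is to override it so it delegates to the wrapped single-agent env and guards against double-closing. A hypothetical sketch (not the repo's code; the class and `wrapped_env` attribute names are my own):

```python
class ClosableMultiEnv:
    """Hypothetical sketch of overriding close() in a MujocoMulti-style
    wrapper: delegate to the underlying single-agent env exactly once."""

    def __init__(self, wrapped_env):
        self.wrapped_env = wrapped_env  # assumed attribute name
        self._closed = False

    def close(self):
        if not self._closed:
            self.wrapped_env.close()  # release the underlying viewer/sim
            self._closed = True

# dummy stand-in for the single-agent env, just to show the call pattern
class DummyEnv:
    def __init__(self):
        self.close_calls = 0

    def close(self):
        self.close_calls += 1

inner = DummyEnv()
env = ClosableMultiEnv(inner)
env.close()
env.close()  # second call is a no-op
print(inner.close_calls)  # 1
```

The guard matters because Gym envs may free native MuJoCo resources in `close`, and closing twice can error or crash.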