multiagent-particle-envs
Code for a multi-agent particle environment used in the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
I am referring to the following function:

```python
def is_collision(self, agent1, agent2):
    delta_pos = agent1.state.p_pos - agent2.state.p_pos
    dist = np.sqrt(np.sum(np.square(delta_pos)))
    dist_min = agent1.size + agent2.size
    return True if dist < dist_min else False
```
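For reference, a minimal numeric sketch of what that check computes, using stand-in objects (`DummyAgent`/`DummyState` are illustrative names, not part of the repo) that expose the same `state.p_pos` and `size` attributes:

```python
import numpy as np

class DummyState:
    def __init__(self, p_pos):
        self.p_pos = np.array(p_pos, dtype=float)

class DummyAgent:
    def __init__(self, p_pos, size):
        self.state = DummyState(p_pos)
        self.size = size  # radius of the agent's circle

# two agents whose centers are 0.1 apart, with radii that sum to 0.125
a = DummyAgent([0.0, 0.0], size=0.075)
b = DummyAgent([0.1, 0.0], size=0.05)

delta_pos = a.state.p_pos - b.state.p_pos
dist = np.sqrt(np.sum(np.square(delta_pos)))  # Euclidean distance between centers
dist_min = a.size + b.size                    # closer than this means the circles overlap
print(dist < dist_min)  # True, so is_collision would report a collision here
```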
Hi, I am using your environment to test multi-agent algorithms. Could I get some more information about the observations, actions, and rewards of the environments you provide? I have been using simple...
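For what it's worth, a hedged sketch of how to inspect those spaces with the repo's `make_env.py` helper (assuming you run from the repository root after `pip install -e .`; the 5-dim one-hot physical action shown applies to the default discrete-action configuration and may differ if you change the environment flags):

```python
import numpy as np
from make_env import make_env  # make_env.py sits at the repository root

env = make_env("simple_tag")
print(env.n)                  # number of agents (adversaries + good agents)
print(env.observation_space)  # list: one observation space per agent
print(env.action_space)       # list: one action space per agent

obs_n = env.reset()           # list of per-agent observation vectors
# With the default discrete action space, each silent agent expects a
# 5-dim vector (no-op, +x, -x, +y, -y); here every agent moves +x.
act_n = [np.eye(5)[1] for _ in range(env.n)]
obs_n, reward_n, done_n, info_n = env.step(act_n)
print(reward_n)               # list of per-agent scalar rewards
```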
Previously, good_agents got more reward by being far from the food; I thought they should be penalized for being far from the food.
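For reference, a hedged sketch of a distance-based shaping term that penalizes a good agent for being far from the nearest food landmark (assuming the food entities are stored in `world.food`, as in simple_world_comm; the helper name and the unit weighting are illustrative, not the repo's own reward function):

```python
import numpy as np

def food_distance_penalty(agent, world):
    # Negative reward proportional to the distance to the closest food
    # landmark, so a good agent loses reward the farther it is from food.
    dists = [np.sqrt(np.sum(np.square(agent.state.p_pos - food.state.p_pos)))
             for food in world.food]
    return -min(dists)
```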
At simple_tag.py lines 11 and 12, there is 1 good agent and 3 adversaries: line 11 `num_good_agents = 1`, line 12 `num_adversaries = 3`, line 17 `for i,...`
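For context, a hedged sketch (paraphrased, not a verbatim copy; line numbers may differ in your checkout) of how those two counts are typically used a few lines later to flag the first `num_adversaries` agents as adversaries:

```python
from multiagent.core import World, Agent

world = World()
num_good_agents = 1                       # simple_tag.py line 11
num_adversaries = 3                       # simple_tag.py line 12
num_agents = num_adversaries + num_good_agents

world.agents = [Agent() for _ in range(num_agents)]
for i, agent in enumerate(world.agents):  # the loop around line 17
    # the first num_adversaries agents are adversaries, the rest are good agents
    agent.adversary = True if i < num_adversaries else False

print([agent.adversary for agent in world.agents])  # [True, True, True, False]
```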
__.py:23: UserWarning: This code base is no longer maintained, and is not expected to be maintained again in the future. For the past handful of years, these environments have been maintained...
After installing the MPE environment, I get an error when I run the `bin/interactive.py --scenario simple.py` command. May I ask why this happens?
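Hard to say without the error text, but for reference the README's intended workflow is to install the package in editable mode from the repository root and then run the script from there; a sketch:

```
# from the root of the multiagent-particle-envs checkout
pip install -e .
python bin/interactive.py --scenario simple.py
```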
Hey, is the environment fully observable or partially observable?
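One hedged way to check for a given scenario is to read (or call) the scenario's `observation` function, which defines exactly what each agent sees; a sketch, assuming the package is installed and using simple_tag as an example:

```python
import multiagent.scenarios as scenarios

scenario = scenarios.load("simple_tag.py").Scenario()
world = scenario.make_world()
scenario.reset_world(world)

# Each agent observes whatever scenario.observation returns for it:
# typically its own velocity/position plus the relative positions (and
# some velocities) of the other entities, not the full simulator state.
for agent in world.agents:
    print(agent.name, scenario.observation(agent, world).shape)
```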