multiagent-particle-envs
cannot run interactive.py
Traceback (most recent call last):
File "/home/shy/桌面/multiagent-particle-envs-master/bin/interactive.py", line 6, in
This may be a problem with the gym version. Try changing the gym version to 0.10.5 and see if it works.
From README of gym:
2019-02-06 (v0.11.0) remove gym.spaces.np_random common PRNG; use per-instance PRNG instead.
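A quick way to check which side of that change you are on (gym.__version__ is a standard attribute, so this check should be safe on any release):

import gym
print(gym.__version__)  # 0.11.0 or later means gym.spaces.prng has been removed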
Have you solved it yet? I'm also working on the same problem. My gym version is v0.14.0 and my Python version is 3.6.x. Are those versions too high to run it?
I can run it with gym version 0.10.5.
Changing to an older version (0.10.5) works, but you'd better make the change described in #53:
remove the line from gym.spaces import prng and replace
random_array = prng.np_random.rand(self.num_discrete_space)
with
random_array = np.random.RandomState().rand(self.num_discrete_space)
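As a minimal, self-contained sketch of that replacement (num_discrete_space below is just a placeholder value; in the repo it is an attribute of the MultiDiscrete space, reportedly defined in multiagent/multi_discrete.py per #53):

import numpy as np

num_discrete_space = 5  # placeholder; the real value comes from the MultiDiscrete space
# old call (needs gym < 0.11): prng.np_random.rand(num_discrete_space)
random_array = np.random.RandomState().rand(num_discrete_space)  # works on any gym version
print(random_array)  # uniform samples in [0, 1), one per discrete sub-space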
@christopherhesse please close
This repo should probably pin the gym version it actually requires in https://github.com/openai/multiagent-particle-envs/blob/master/setup.py#L12 instead of just picking up the latest.
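For example, the pin could look something like this (an illustrative sketch only; the field values are assumptions, not the repo's actual setup.py):

from setuptools import setup, find_packages

setup(
    name='multiagent',            # assumed package name
    version='0.0.1',              # assumed version
    packages=find_packages(),
    install_requires=['gym==0.10.5', 'numpy'],  # pin gym to a release that still ships gym.spaces.prng
)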
@christopherhesse thanks for the quick reply. Can you help me out with the following code? The observation space is giving an error.
import numpy as np
import gym
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory
from make_env import make_env  # helper defined in this repo's make_env.py

# Building the environment
env = make_env('simple_adversary')
np.random.seed(0)
env.seed(0)

# Extracting the number of possible actions
num_actions = env.action_space[0].n  # actions of the first agent; change the index to pick another agent
print(num_actions)

# Layers
agent = Sequential()
print(env.observation_space[0].shape, type(env.observation_space[0]))
agent.add(Flatten(input_shape=(1, ) + env.observation_space[0].shape))
agent.add(Dense(16))
agent.add(Activation('relu'))
agent.add(Dense(num_actions))
agent.add(Activation('linear'))
agent.summary()

# Building the model
strategy = EpsGreedyQPolicy()
memory = SequentialMemory(limit=10000, window_length=1)
dqn = DQNAgent(model=agent, nb_actions=num_actions, memory=memory, nb_steps_warmup=10, target_model_update=1e-2, policy=strategy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

# Visualizing the training
dqn.fit(env, nb_steps=5000, visualize=False, verbose=2)
# Testing
dqn.test(env, nb_episodes=5, visualize=False)
I am using the same versions, gym==0.14.0 and Python 3.6. It is working fine for me.
Please use the below for loading the env:

from multiagent.environment import MultiAgentEnv
import multiagent.scenarios as scenarios

scenario = scenarios.load('simple_adversary.py').Scenario()
world = scenario.make_world()
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation,
                    info_callback=None, shared_viewer=False)
Now, you can proceed with the code for adding neural networks.
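If the observation space still gives trouble after loading the env this way, a quick sanity check like the sketch below may help; it assumes MultiAgentEnv exposes per-agent lists for action_space and observation_space and an agent count n, which matches how they are indexed earlier in this thread:

print(env.n)                           # number of agents (assumed attribute)
print(env.action_space[0].n)           # discrete action count for the first agent
print(env.observation_space[0].shape)  # observation shape for the first agent, usable as the Keras input shape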
Hope this helps