
cannot run interactive.py

Open SHYang1210 opened this issue 5 years ago • 10 comments

Traceback (most recent call last):
  File "/home/shy/桌面/multiagent-particle-envs-master/bin/interactive.py", line 6, in <module>
    from multiagent.environment import MultiAgentEnv
  File "/home/shy/桌面/multiagent-particle-envs-master/bin/../multiagent/environment.py", line 5, in <module>
    from multiagent.multi_discrete import MultiDiscrete
  File "/home/shy/桌面/multiagent-particle-envs-master/bin/../multiagent/multi_discrete.py", line 7, in <module>
    from gym.spaces import prng
ImportError: cannot import name 'prng'
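
For anyone triaging this, the failing import can be reproduced in isolation; a minimal sketch, assuming a gym release of 0.11.0 or newer is installed:

# Reproduces the failing import on its own (gym >= 0.11.0 no longer ships gym.spaces.prng)
try:
    from gym.spaces import prng
except ImportError as e:
    print(e)  # cannot import name 'prng'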

SHYang1210 avatar Mar 19 '19 09:03 SHYang1210

This may be a problem with the gym version. Try changing the gym version to 0.10.5 and see if it works.

SweetPin avatar Mar 21 '19 01:03 SweetPin

From the gym README:

2019-02-06 (v0.11.0) remove gym.spaces.np_random common PRNG; use per-instance PRNG instead.
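In other words, from 0.11.0 onwards each space carries its own random state instead of the shared gym.spaces.prng module. A small sketch of the per-instance API (assuming gym >= 0.11.0, where the seed() method on spaces was introduced alongside this change):

import gym

space = gym.spaces.Discrete(5)
space.seed(0)          # seeds this space's own np_random; no shared prng module involved
print(space.sample())  # draws from the per-instance RNG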

Privilger avatar Jul 14 '19 12:07 Privilger

Have you solved it yet? I'm also working on the same problem. My gym version is v0.14.0 and my Python version is 3.6.x. Are those versions too high to run it?

ZechenLiu avatar Aug 11 '19 04:08 ZechenLiu

Have you solved it yet? I'm also working on the same problem. My gym version is v0.14.0 and my Python version is 3.6.x. Are those versions too high to run it?

I can run it with gym version 0.10.5.

canyon avatar Aug 24 '19 10:08 canyon

Changing to an old version (0.10.5) works; better yet, make the change as described in #53.

zhaolongkzz avatar Sep 26 '19 03:09 zhaolongkzz

In multiagent/multi_discrete.py, remove the line "from gym.spaces import prng" and replace "random_array = prng.np_random.rand(self.num_discrete_space)" with

random_array = np.random.RandomState().rand(self.num_discrete_space)
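
For context, here is a self-contained sketch of how the patched sampling behaves. The class and attribute names mirror the repo's multi_discrete.py, but the body below is an illustration rather than a verbatim copy of the file:

import numpy as np

class MultiDiscreteSketch:
    """Minimal stand-in for multiagent.multi_discrete.MultiDiscrete (illustrative only)."""
    def __init__(self, array_of_param_array):
        self.low = np.array([x[0] for x in array_of_param_array])
        self.high = np.array([x[1] for x in array_of_param_array])
        self.num_discrete_space = self.low.shape[0]

    def sample(self):
        # Patched line: a local RandomState replaces the removed gym.spaces.prng
        random_array = np.random.RandomState().rand(self.num_discrete_space)
        return [int(x) for x in np.floor(np.multiply((self.high - self.low + 1.0), random_array) + self.low)]

space = MultiDiscreteSketch([[0, 4], [0, 1]])
print(space.sample())  # e.g. [3, 0]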

@christopherhesse please close

indhra avatar Apr 21 '20 10:04 indhra

This repo should probably pin the required gym version in https://github.com/openai/multiagent-particle-envs/blob/master/setup.py#L12 instead of just picking up the latest.
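
For example, a pinned dependency in setup.py might look like the sketch below; the field values are illustrative, and 0.10.5 is simply the version commenters above report working:

from setuptools import setup, find_packages

setup(
    name='multiagent',      # package name as in the repo; other fields omitted for brevity
    version='0.0.1',
    packages=find_packages(),
    # Pin gym to a release that still ships gym.spaces.prng (assumption based on this thread)
    install_requires=['gym==0.10.5'],
)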

christopherhesse avatar Apr 21 '20 17:04 christopherhesse

@christopherhesse thanks for the quick reply. Can you help me out with the following code?

The observation space is giving an error:

import numpy as np
import gym

from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam

from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

from make_env import make_env  # assuming make_env.py from the repo root; this import was missing in the original snippet

# Building the environment
env = make_env('simple_adversary')

np.random.seed(0)
env.seed(0)

# Extracting the number of possible actions
num_actions = env.action_space[0].n  # first agent's actions; changing this value selects another agent
print(num_actions)

# Layers
agent = Sequential()
print(env.observation_space[0].shape, type(env.observation_space[0]))
agent.add(Flatten(input_shape=(1, ) + env.observation_space[0].shape))
agent.add(Dense(16))
agent.add(Activation('relu'))
agent.add(Dense(num_actions))
agent.add(Activation('linear'))
agent.summary()

# Building the model
strategy = EpsGreedyQPolicy()
memory = SequentialMemory(limit=10000, window_length=1)
dqn = DQNAgent(model=agent, nb_actions=num_actions, memory=memory,
               nb_steps_warmup=10, target_model_update=1e-2, policy=strategy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

# Visualizing the training
dqn.fit(env, nb_steps=5000, visualize=False, verbose=2)

# Testing
dqn.test(env, nb_episodes=5, visualize=False)

indhra avatar Apr 23 '20 07:04 indhra

Have you solved it yet? I'm also working on the same problem. My gym version is v0.14.0 and my Python version is 3.6.x. Are those versions too high to run it?

I am using the same versions, gym==0.14.0 and Python 3.6. It is working fine for me.

AmulyaReddy99 avatar May 11 '20 12:05 AmulyaReddy99

@christopherhesse thanks for the quick reply. Can you help me out with the following code? [full code quoted above]

Please use the below for loading the env:

from multiagent.environment import MultiAgentEnv
import multiagent.scenarios as scenarios

scenario = scenarios.load('simple_adversary.py').Scenario()
world = scenario.make_world()
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation, info_callback=None, shared_viewer=False)
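
As a quick sanity check (a hypothetical snippet, not part of the original comment), the resulting env exposes per-agent lists rather than single spaces, which is why the [0] indexing is needed above:

obs_n = env.reset()                    # one observation per agent
print(len(obs_n))                      # number of agents in simple_adversary
print(env.observation_space[0].shape)  # Box shape for the first agent
print(env.action_space[0])             # action space for the first agent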

Now, you can proceed with the code for adding neural networks.

Hope this helps

AmulyaReddy99 avatar May 11 '20 12:05 AmulyaReddy99