
No attribute 'wrappers'

Open wonchul-kim opened this issue 8 years ago • 5 comments

In Deep Q-Learning for Atari Games (Deep Q Learning Solution.py), there seems to be a version-related issue, I guess.

When I run it, the following comes up:

```
Populating replay memory...

Error                                     Traceback (most recent call last)
in ()
     31     epsilon_decay_steps=500000,
     32     discount_factor=0.99,
---> 33     batch_size=32):
     34
     35 print("\nEpisode Reward: {}".format(stats.episode_rewards[-1]))

in deep_q_learning(sess, env, q_estimator, target_estimator, state_processor, num_episodes, experiment_dir, replay_memory_size, replay_memory_init_size, update_target_estimator_every, discount_factor, epsilon_start, epsilon_end, epsilon_decay_steps, batch_size, record_video_every)
    107     """
    108     # Record videos
--> 109     env.monitor.start(monitor_path,
    110                       resume=True,
    111                       video_callable=lambda count: count % record_video_every == 0)

/home/wonchul/gym/gym/core.py in monitor(self)
     90     @property
     91     def monitor(self):
---> 92         raise error.Error("env.monitor has been deprecated as of 12/23/2016. Remove your call to env.monitor.start(directory) and instead wrap your env with env = gym.wrappers.Monitor(env, directory) to record data.")
     93
     94     def step(self, action):

Error: env.monitor has been deprecated as of 12/23/2016. Remove your call to env.monitor.start(directory) and instead wrap your env with env = gym.wrappers.Monitor(env, directory) to record data.
```

==> So, I changed `env.monitor.start(directory)` to `env = gym.wrappers.Monitor(env, directory)`. But this time, the following came up:

```
Populating replay memory...

AttributeError                            Traceback (most recent call last)
in ()
     31     epsilon_decay_steps=500000,
     32     discount_factor=0.99,
---> 33     batch_size=32):
     34
     35 print("\nEpisode Reward: {}".format(stats.episode_rewards[-1]))

in deep_q_learning(sess, env, q_estimator, target_estimator, state_processor, num_episodes, experiment_dir, replay_memory_size, replay_memory_init_size, update_target_estimator_every, discount_factor, epsilon_start, epsilon_end, epsilon_decay_steps, batch_size, record_video_every)
    108     # Record videos
    109     #env.monitor.start
--> 110     env = gym.wrappers.Monitor(env, monitor_path,
    111                                resume=True,
    112                                video_callable=lambda count: count % record_video_every == 0)

AttributeError: 'module' object has no attribute 'wrappers'
```

==> So, I googled this error to solve it, but nobody seemed to have an answer. (I found that someone solved this problem by upgrading gym, but I don't know how to upgrade. Besides, I installed gym only 5 days ago, so it should be a fairly recent version.)

Could you help me out???

wonchul-kim avatar Jan 31 '17 10:01 wonchul-kim

I'm running into the same problem.

AlexZhou1995 avatar Mar 03 '17 02:03 AlexZhou1995

Encountering the same issue... hopefully somebody with more knowledge of this than me can help.

CurrenJ avatar Mar 05 '17 21:03 CurrenJ

Spoke too soon: I believe I've fixed it. Add the line

```python
from gym import wrappers
```

after

```python
import gym
```

at the beginning of your atari_1step_qlearning.py file.

If you are running this in Python 3, also change `xrange` to `range` in the same file.
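For context, `xrange` was removed in Python 3; `range` is now lazy, just like Python 2's `xrange`. As an alternative to renaming every call site, a minimal compatibility shim (just a sketch, not part of the original script) also works:

```python
import sys

# Python 3 removed xrange(); range() is now lazy, like Python 2's xrange().
# Binding the old name lets a Python 2 script run unchanged, as an
# alternative to editing every xrange call by hand.
if sys.version_info[0] >= 3:
    xrange = range

total = sum(i for i in xrange(5))
print(total)  # 0 + 1 + 2 + 3 + 4 = 10
```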

That worked for me, let me know if that solves your issue.
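In case it helps anyone else hitting this: the explicit import works because importing a package does not automatically import its submodules. Older gym releases did not import `wrappers` in `gym/__init__.py`, so after a plain `import gym` the attribute was never bound. A sketch of the same mechanism using a stdlib package (the `xml`/`xml.etree` pair here is just an illustration, not gym itself):

```python
import importlib

# Importing a package binds only what its __init__.py imports. A
# submodule becomes an attribute of the package once it is imported
# explicitly, which is what `from gym import wrappers` does for gym.
pkg = importlib.import_module("xml")   # like `import gym`
importlib.import_module("xml.etree")   # like `from gym import wrappers`
print(hasattr(pkg, "etree"))           # True: submodule is now bound
```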

CurrenJ avatar Mar 05 '17 21:03 CurrenJ

It works for me. Thank you very much!

AlexZhou1995 avatar Mar 06 '17 06:03 AlexZhou1995

Thanks, it works for me.

aaron8tang avatar Oct 17 '18 15:10 aaron8tang