DQN_DDQN_Dueling_and_DDPG_Tensorflow
TensorFlow + OpenAI Gym implementation of Deep Q-Network (DQN), Double DQN (DDQN), Dueling Network, and Deep Deterministic Policy Gradient (DDPG)
[2018-04-23 15:16:46,849] Making new env: Seaquest-v0
Traceback (most recent call last):
  File "gym_dqn_atari.py", line 58, in <module>
    env = AtariEnvWrapper(ENV_NAME)
  File "/home/tanggy/Downloads/DQN_DDQN_Dueling_and_DDPG_Tensorflow-master/modules/env_utils.py", line 32, in __init__
    self.monitor = self.env.monitor
AttributeError: 'TimeLimit'...
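Judging from the traceback, the crash happens because newer gym versions no longer expose an `env.monitor` attribute, so the wrapper's unconditional `self.env.monitor` access raises `AttributeError`. A minimal compatibility sketch of the failing line is below; the class name is taken from the traceback, but the body is an illustrative assumption, not the repo's actual code:

```python
class AtariEnvWrapper:
    """Sketch of the wrapper from modules/env_utils.py (hypothetical body).

    Newer gym releases removed the `env.monitor` attribute, which is what
    triggers the AttributeError above. Guarding the access with getattr
    keeps the wrapper importable on both old and new gym versions.
    """

    def __init__(self, env):
        self.env = env
        # Old gym: env.monitor exists; new gym: fall back to None.
        self.monitor = getattr(env, "monitor", None)


if __name__ == "__main__":
    # Dummy stand-in for a gym env that lacks `.monitor` (new gym behavior).
    class DummyEnv:
        pass

    wrapper = AtariEnvWrapper(DummyEnv())
    print(wrapper.monitor)  # → None instead of an AttributeError
```

On old gym versions the wrapper still picks up the real monitor object; on new ones it degrades gracefully, and recording would instead go through `gym.wrappers.Monitor`.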
I found that the code does not work with later gym versions: e.g., gym 0.6.0 fails while 0.5.6 works. In terms of TensorFlow versions, I found the code...
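Given the report above, one workaround is to pin the last known-working gym release in a requirements file. The gym pin comes from the comment above; the TensorFlow pin is left unspecified because the working version is truncated in the original message:

```
gym==0.5.6
# tensorflow==<version reported to work; see truncated comment above>
```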
Hi, spiglerg! Thank you for replying to me so soon. However, I wonder whether removing batch normalization will significantly hurt performance, since I want to test the code on "Reacher"...