Deep_reinforcement_learning_Course
Deep Q Learning Spaceinvaders
I've trained the model for 50 total episodes. However, when I run the last code cell, the action is always the same. I've printed Qs and the action, and the action is always [0 0 0 0 0 0 1 0]. The agent never moves and just dies after 3 lives.
I tested the environment with this snippet, which just picks a random action:

```python
# Sanity check: pick a random one-hot action each step
choice = np.random.rand(1, 8)
choice = np.argmax(choice[0])
print(choice)
action = possible_actions[choice]
```
and the environment renders and the agent dies at around 200 points. So my installation is fine.
Any idea what I'm doing wrong?
I also logged more information on the training. The actions during training are different (agent is trying all the possible actions). Here is the information for the first 2 episodes:
Episode: 0  Total reward: 50.0   Explore P: 0.9880  Training loss: 2.5707
Episode: 1  Total reward: 110.0  Explore P: 0.9673  Training loss: 238.0061
After my second training attempt, the agent only performs [1 0 0 0 0 0 0 0].
Why is the agent only repeating one action when during training it is trying all the different actions?
LOL, third attempt and now it is only generating [0 0 1 0 0 0 0 0]. Is there something wrong with the inference?
@noobmaster29, how did you solve the problem mentioned above? I ran into the same problem.
I'm having the same problem. The agent always chooses the first action until it dies.
Sampling the Space Invaders environment's action space returns something like array([0, 1, 0, 1, 1, 1, 1, 0], dtype=int8), so I think taking the argmax during training is not correct.
@xiongsenlin No unfortunately, I have not been able to resolve the issue.
@HemaZ Argmax should be correct: it takes the index of the highest Q-value and produces a one-hot action vector, with that index set to 1 and everything else 0. I'm not sure why there is more than one 1 in your action array, though.
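For reference, the argmax-to-one-hot conversion described above can be sketched like this (the Q-values are made up for illustration; `possible_actions` as an identity matrix follows the course notebook's convention):

```python
import numpy as np

# Hypothetical Q-values for the 8-dimensional action space
Qs = np.array([0.1, 0.5, 0.2, 0.05, 0.0, 0.03, 0.9, 0.4])

# One-hot action matrix: row i is the action with only index i pressed
possible_actions = np.identity(8, dtype=int)

choice = int(np.argmax(Qs))        # index of the highest Q-value (here: 6)
action = possible_actions[choice]  # one-hot vector: that index is 1, rest are 0
print(action)  # [0 0 0 0 0 0 1 0]
```

By construction this always yields exactly one 1 in the action vector, which is why an action like [0, 1, 0, 1, 1, 1, 1, 0] cannot come from the argmax path.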
I'm in the same boat. The model that ships with the project, which I assume is pre-trained, doesn't move either.
Good tip on trying out the random agent and seeing how it performed.
Yeah, I tried loading the pre-trained network but the agent still doesn't work. Maybe the author could help on getting the notebook to work.
Actually, it's working for me now, but I don't remember what change I made. Maybe I trained it for a little longer. Check my implementation and weights: https://github.com/HemaZ/Deep-Reinforcement-Learning/tree/master/DQN
I'll give it another shot.
The problem is that the network is not trained enough. The only meaningful actions are 0 (fire), 6 (left), and 7 (right). So if you got action [0 0 0 0 0 0 1 0], that is actually the left action, and the agent never moves because it's already in the left corner.
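A quick way to sanity-check what a one-hot action actually does is to decode it against the button layout. The button order below is an assumption for the gym-retro Atari 2600 environment the course uses; verify it with `env.buttons` on your own install:

```python
# Assumed gym-retro Atari 2600 button order -- verify with env.buttons on your setup
BUTTONS = ['FIRE', None, 'SELECT', 'RESET', 'UP', 'DOWN', 'LEFT', 'RIGHT']

def decode(action):
    """Return the names of the buttons pressed in a multi-binary action vector."""
    return [BUTTONS[i] for i, pressed in enumerate(action) if pressed]

print(decode([0, 0, 0, 0, 0, 0, 1, 0]))  # ['LEFT']  -- the stuck agent above
print(decode([1, 0, 0, 0, 0, 0, 0, 0]))  # ['FIRE']
```

Indices 1-5 map to buttons that do nothing in Space Invaders, which is consistent with the comment above that only fire, left, and right matter.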
> I'll give it another shot.
Hey Nathan, have you tried again?
I did! Nothing came out of it. However, when I tried the keras-rl library, which implements dueling double DQN, I got some pretty good results.
You mean for Space Invaders? Will you open a repository for it?
Any news here?
It looks like the model only does something during training because of the random action picking in the predict_action function. If I test the trained model, nothing happens (I trained it for more than 50 episodes).
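That would explain the thread's symptoms: an epsilon-greedy rule frequently overrides the network with a random action during training, while at test time only the (possibly undertrained) argmax is used. A minimal sketch of that logic, with names loosely following the course notebook and the network replaced by a plain Q-value array:

```python
import numpy as np

possible_actions = np.identity(8, dtype=int)

def predict_action(explore_prob, Qs):
    """Epsilon-greedy: random action with probability explore_prob, else argmax of Qs."""
    if np.random.rand() < explore_prob:
        choice = np.random.randint(len(possible_actions))  # exploration (training)
    else:
        choice = int(np.argmax(Qs))  # exploitation -- all you get at test time
    return possible_actions[choice]

# Early in training Explore P is ~0.99, so the varied actions come from the
# random branch; at test time explore_prob is 0, and an undertrained network
# with a fixed argmax will repeat the same one-hot action every step.
Qs = np.array([0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.05, 0.03])
print(predict_action(0.0, Qs))  # [1 0 0 0 0 0 0 0]
```

With explore_prob at 0, the only way the action changes is if the Q-values change, so a network stuck on one argmax looks frozen at test time even though it explored freely during training.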