DeepLearningFlappyBird
Fixed TypeError when retraining
The error 'TypeError: Population must be a sequence or set. For dicts, use list(d)' sometimes occurred, so I made this small fix for future users who might play with it.
Can you describe your environment? I have never seen TypeError: Population must be a sequence or set
Sure,
- Ubuntu 14.04
- Tensorflow 0.8
- Python 3.4
It looks like this:
...
TIMESTEP 10000 / STATE observe / EPSILON 0.1 / ACTION 0 / REWARD 0.1 / Q_MAX -1.201076e-03
TIMESTEP 10001 / STATE explore / EPSILON 0.1 / ACTION 0 / REWARD 0.1 / Q_MAX -1.623187e-03
Traceback (most recent call last):
File "deep_q_network.py", line 215, in <module>
main()
File "deep_q_network.py", line 212, in main
playGame()
File "deep_q_network.py", line 209, in playGame
trainNetwork(s, readout, h_fc1, sess)
File "deep_q_network.py", line 153, in trainNetwork
minibatch = random.sample(D, BATCH)
File "/home/cave/anaconda3/envs/tensorflow/lib/python3.4/random.py", line 311, in sample
raise TypeError("Population must be a sequence or set. For dicts, use list(d).")
TypeError: Population must be a sequence or set. For dicts, use list(d).
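For reference, here is a minimal sketch of the failing call and the workaround. The names D and BATCH follow the traceback; the buffer size and dummy transitions are illustrative, not the repo's actual values:

```python
import random
from collections import deque

BATCH = 32
D = deque(maxlen=50000)  # replay memory, as in trainNetwork()

# Fill with dummy (state, action, reward, next_state, terminal) tuples
# purely for illustration.
for i in range(1000):
    D.append((i, 0, 0.1, i + 1, False))

# On Python < 3.5, random.sample() rejects a deque because deque is not
# registered as a Sequence there; copying to a list works everywhere.
minibatch = random.sample(list(D), BATCH)
print(len(minibatch))  # 32
```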
Big thanks for your report.
What I'm thinking is that D here is a deque rather than a dict, so I'm not sure this is the right way to fix it.
Also, we need to find out what causes it to happen.
Can you provide the parameter you are using? e.g. OBSERVE, EXPLORE
I just followed 'How to reproduce?', so I have exactly the same parameters as you wrote. But I have some problems: most of the time the bird keeps going up and can't go through the pipes. That might be related.
But I have some problems: most of the time the bird keeps going up and can't go through the pipes.
That's normal. You should let it run in the background and check it tomorrow 😈 (make sure you are using a GPU)
That might be related.
No, I don't think so. There may be something wrong when the system switches from the OBSERVE state to the EXPLORE state, since you crashed at TIMESTEP 10001.
Tomorrow? Okay 🎈 Every time it switches from OBSERVE to EXPLORE I get this same error.
Nice! So I think we've found the problem. I will take a closer look at this tomorrow since I am currently quite busy.
BTW, the reason the bird keeps going up and can't go through the pipes is that it still has a high probability of choosing a random action. And since the game runs at 30 FPS, i.e. the agent can make an action every ~0.033s, it has a high probability of choosing to jump, which results in what you observed.
However, after some epochs of training, it will keep sampling those bad memories to fix its behavior. And when ϵ finally anneals down to a relatively low value, the agent will start to follow the policy it learned. (no more random actions!)
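The schedule described above can be sketched like this. The constants are illustrative: OBSERVE matches the 10000-step boundary visible in the log, INITIAL_EPSILON matches the "EPSILON 0.1" in the log, and the others are placeholders (the real values live in deep_q_network.py):

```python
import random

OBSERVE = 10000         # pure observation steps (matches the log above)
EXPLORE = 2000000       # steps over which epsilon is annealed (illustrative)
INITIAL_EPSILON = 0.1   # matches "EPSILON 0.1" in the log
FINAL_EPSILON = 0.0001  # illustrative floor

epsilon = INITIAL_EPSILON
for t in range(30000):
    # With probability epsilon pick a random action (often "jump"),
    # otherwise follow the learned policy (here stubbed as action 0).
    action = random.randrange(2) if random.random() <= epsilon else 0
    # Anneal epsilon linearly, but only after the observation phase.
    if t > OBSERVE and epsilon > FINAL_EPSILON:
        epsilon -= (INITIAL_EPSILON - FINAL_EPSILON) / EXPLORE
```

With these numbers epsilon has only annealed slightly after 30000 steps, which is why the bird still jumps wildly early in training.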
At that time, you can see the bird flying like a ninja.
Hope this helps! 🍻
Hey @szymonk92 , how's it going?
Any updates? @szymonk92
It's fine, but I had to add list(D) to make it work. Did you test your version on Python 3.4?
@yenchenlin For Python versions below 3.5, random.sample() cannot take a deque as its argument. As @szymonk92 mentioned, use list(D) as a fix for Python versions below 3.5.
I would suggest putting list(D) in the code, since it works for most versions of Python.
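If the extra copy per training step is a concern, a version-aware variant is also possible. This is just a sketch (the helper name sample_minibatch is mine, not from the repo); the unconditional list(D) above is simpler:

```python
import random
import sys
from collections import deque

def sample_minibatch(D, batch_size):
    """Sample a minibatch uniformly from the replay deque on any Python version."""
    # Python >= 3.5 registers deque as a MutableSequence, so
    # random.sample(D, n) works directly; older versions need a list copy.
    population = D if sys.version_info >= (3, 5) else list(D)
    return random.sample(population, batch_size)

D = deque((i,) for i in range(100))
print(len(sample_minibatch(D, 32)))  # 32
```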
I had the same issue on Python 3.4: there was a bug when OBSERVE mode transitioned to EXPLORE. Converting the deque D to a list fixed it. I will retract my pull request.