dqn_zoo
MoG-DQN not performing well
Hello, thanks for all the cool implementations.
I was specifically interested in MoG-DQN. However, running your implementation, it does not seem to learn even the simple CartPole problem after thousands of episodes, whereas the standard DQN agent consistently reaches the maximum score after roughly 450 episodes.
Is there something wrong with the implementation? I have been trying to implement MoG-DQN myself for about a month now, and all my attempts have been unsuccessful. That is why I wanted to try your implementation, to see what I was doing wrong, but it appears that it does not work either. This is very surprising to me, because in the paper MoG-DQN learns even Atari games, while in my experience it cannot solve CartPole. How come?
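For context on what I expect the agent to do: with a mixture-of-Gaussians return distribution, the greedy Q-value of each action is just the mixture mean. Here is a minimal numpy sketch of that step (the shapes and names are illustrative, not the dqn_zoo API), which is the part I believe both our implementations share regardless of the training loss:

```python
import numpy as np

def mog_q_values(logits, mus):
    """Expected Q-value per action from a mixture-of-Gaussians return
    distribution: Q(s, a) = sum_k pi_k(s, a) * mu_k(s, a), where the
    mixture weights pi_k come from a softmax over component logits."""
    pis = np.exp(logits - logits.max(axis=-1, keepdims=True))
    pis /= pis.sum(axis=-1, keepdims=True)  # softmax over components
    return (pis * mus).sum(axis=-1)

# Hypothetical head output: 2 actions, 3 mixture components each.
logits = np.zeros((2, 3))            # equal logits -> uniform weights 1/3
mus = np.array([[0.0, 1.0, 2.0],     # action 0: mixture mean 1.0
                [3.0, 3.0, 3.0]])    # action 1: mixture mean 3.0
q = mog_q_values(logits, mus)        # -> [1.0, 3.0]
greedy_action = int(q.argmax())      # -> 1
```

So if action selection is this simple, my suspicion is that the problem lies in the distributional loss or the target construction rather than in the head itself.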