NoisyNetDQN
A TensorFlow implementation of "Noisy Networks for Exploration" (Fortunato et al., 2017).
The Q-value network consists of 2 convolutional layers and 3 fully connected layers, and the Atari game Breakout-v0 is used for testing.
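The "noisy" part of the network comes from replacing ordinary dense layers with layers whose weights are perturbed by learned, factorised Gaussian noise, as described in the paper. Below is a minimal, self-contained sketch of such a layer; the class name NoisyDense, the Keras-style API, and the sigma0 = 0.5 initialisation follow the paper rather than the code in this repository, so treat it as an illustration only.

```python
import tensorflow as tf

class NoisyDense(tf.keras.layers.Layer):
    """Fully connected layer with factorised Gaussian noise
    (Fortunato et al., "Noisy Networks for Exploration")."""

    def __init__(self, units, activation=None, sigma0=0.5, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = tf.keras.activations.get(activation)
        self.sigma0 = sigma0

    def build(self, input_shape):
        self.in_dim = int(input_shape[-1])
        bound = self.in_dim ** -0.5
        mu_init = tf.keras.initializers.RandomUniform(-bound, bound)
        sigma_init = tf.keras.initializers.Constant(self.sigma0 * bound)
        # Learnable means and noise scales for weights and biases.
        self.w_mu = self.add_weight("w_mu", (self.in_dim, self.units), initializer=mu_init)
        self.w_sigma = self.add_weight("w_sigma", (self.in_dim, self.units), initializer=sigma_init)
        self.b_mu = self.add_weight("b_mu", (self.units,), initializer=mu_init)
        self.b_sigma = self.add_weight("b_sigma", (self.units,), initializer=sigma_init)

    @staticmethod
    def _f(x):
        # Noise-shaping function from the paper: f(x) = sign(x) * sqrt(|x|).
        return tf.sign(x) * tf.sqrt(tf.abs(x))

    def call(self, inputs):
        # Factorised noise: one noise vector per input unit, one per output unit.
        eps_in = self._f(tf.random.normal((self.in_dim, 1)))
        eps_out = self._f(tf.random.normal((1, self.units)))
        w = self.w_mu + self.w_sigma * (eps_in * eps_out)
        b = self.b_mu + self.b_sigma * tf.squeeze(eps_out, axis=0)
        return self.activation(tf.matmul(inputs, w) + b)
```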
If you test on CartPole or other games whose state is 1-dimensional, the Q-value network should contain only dense layers; see the sketch below.
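For instance, a dense-only Q-network for CartPole could look like the following; the 64-unit hidden layers and the reuse of the NoisyDense layer sketched above are my own choices, not taken from this repository.

```python
import tensorflow as tf

def build_cartpole_q_network(state_dim=4, num_actions=2):
    # For 1-dimensional states, convolutions are unnecessary: a couple of
    # hidden (noisy) dense layers plus a linear output layer suffice.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(state_dim,)),
        NoisyDense(64, activation="relu"),
        NoisyDense(64, activation="relu"),
        NoisyDense(num_actions),  # one Q-value per action
    ])
```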
For comparison, a plain DQN implementation is provided in DQN.py.
The feedback after an action contains 4 parts:
state, real_reward, game_over, lives_rest
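With the classic Gym API, these four parts map onto env.step() roughly as sketched below. Note that the remaining-lives counter is read from the info dictionary, and its key ('ale.lives' in older Gym/ALE releases, 'lives' in newer ones) depends on your installed version, so check it against your setup.

```python
import gym

env = gym.make("Breakout-v0")
state = env.reset()

# One step with a random action; the four parts described above.
next_state, real_reward, game_over, info = env.step(env.action_space.sample())
lives_rest = info["ale.lives"]  # remaining lives (key may be "lives" on newer ALE versions)
```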
There are 4 actions in the game Breakout-v0 (you can verify them with the snippet after this list):
0: hold and do nothing
1: throw the ball
2: move right
3: move left
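The action set can be confirmed directly from the environment; get_action_meanings() is provided by the ALE-based Atari environments in Gym:

```python
import gym

env = gym.make("Breakout-v0")
print(env.action_space)                      # Discrete(4)
print(env.unwrapped.get_action_meanings())   # ['NOOP', 'FIRE', 'RIGHT', 'LEFT']
```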