rl_lib
Series of deep reinforcement learning algorithms 🤖
Motivation: I have always thought that the only way to truly test whether you understand a concept is to see if you can build it. As such, all of these algorithms were implemented by studying the relevant papers and coded to test my understanding.
"What I cannot create, I do not understand" - Richard Feynman
Algorithms
DQN
- Vanilla DQN
- Noisy DQN
- Dueling DQN
- Double DQN
- Prioritised Experience Replay DQN
- Rainbow DQN
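To give a flavour of how these variants differ, here is a minimal sketch of the Double DQN target computation (the function and argument names are illustrative, not this library's API): the online network selects the best next action, while the target network evaluates it, which reduces the over-estimation bias of vanilla DQN.

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Compute Double DQN bootstrap targets for a batch of transitions.

    next_q_online / next_q_target: Q-values for the next states from the
    online and target networks, shape (batch, n_actions). Illustrative only.
    """
    # Action selection uses the online network...
    best_actions = np.argmax(next_q_online, axis=1)
    # ...but action evaluation uses the target network.
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    # Terminal transitions (done=1) bootstrap from nothing.
    return rewards + gamma * evaluated * (1.0 - dones)

# Tiny batch of two transitions, the second terminal.
rewards = np.array([1.0, 0.0])
next_q_online = np.array([[0.2, 0.8], [0.5, 0.1]])
next_q_target = np.array([[0.3, 0.6], [0.4, 0.2]])
dones = np.array([0.0, 1.0])
targets = double_dqn_targets(rewards, next_q_online, next_q_target, dones)
```

Vanilla DQN would instead take `max` over `next_q_target` directly, coupling selection and evaluation in the same (noisy) estimates.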
Policy Gradient
- Advantage Actor Critic (A2C) - single environment
- Advantage Actor Critic (A2C) - multi environment
- Deep Deterministic Policy Gradients
- Proximal Policy Optimisation (discrete and continuous)
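The core idea behind PPO is its clipped surrogate objective, which stops any single update from moving the policy too far from the one that collected the data. A minimal numpy sketch (names here are illustrative, not taken from the notebooks):

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, eps=0.2):
    """PPO clipped surrogate objective (to be maximised).

    new_logp / old_logp: log-probabilities of the taken actions under
    the current and behaviour policies; advantages: advantage estimates.
    """
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    # Clipping the ratio removes the incentive to move outside [1-eps, 1+eps].
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Taking the minimum makes the objective a pessimistic bound.
    return np.mean(np.minimum(unclipped, clipped))

# Example: one action became twice as likely, one is unchanged.
new_logp = np.log(np.array([0.5, 0.9]))
old_logp = np.log(np.array([0.25, 0.9]))
advantages = np.array([1.0, -1.0])
objective = ppo_clip_objective(new_logp, old_logp, advantages)
```

The same objective works for discrete and continuous actions; only the way the log-probabilities are computed differs.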
Tabular Solutions
These were mainly referenced from a really good YouTube lecture series by Colin Skow [link]. A large part was also drawn from Udacity's Deep Reinforcement Learning course.
- Bellman Equation
- Dynamic Programming
- Q learning
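The tabular methods all revolve around the Bellman equation; Q-learning, for example, applies it as a simple per-step update on a table of state-action values. A minimal sketch (the function is illustrative, not this repo's code):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s,a) += alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)].
    Q is an (n_states, n_actions) table, updated in place.
    """
    # Bootstrap from the greedy value of the next state, unless terminal.
    target = r + (0.0 if done else gamma * np.max(Q[s_next]))
    # Move the current estimate a fraction alpha towards the target.
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Two states, two actions; state 1 already values action 1 at 1.0.
Q = np.zeros((2, 2))
Q[1, 1] = 1.0
q_learning_update(Q, s=0, a=0, r=1.0, s_next=1, done=False)
```

Because the max is taken over the next state's values regardless of the action actually taken there, Q-learning is off-policy.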
Associated Articles
- Tutorial on PPO: A Graphic Guide to Implementing PPO for Atari Games
Results
DQN Pong
- Converged to an average score of 17.56 after 1300 episodes.
- Code and results can be found under
DQN/7. Vanilla DQN Atari.ipynb
DDPG Continuous
- Converged to ~ -270 after ~100 episodes.
- Code and results can be found under
Policy Gradient/4. DDPG.ipynb.ipynb
PPO discrete
- Solved in 409 episodes
- Code and results can be found under
Policy Gradient/5. PPO.ipynb
PPO Atari - with Baseline Enhancements
- Code and results can be found under
PPO/
Todo
- Curiosity Driven Exploration
- HER (Hindsight Experience Replay)
- Recurrent networks in PPO and DDPG
Credits
Whilst I tried to code everything directly from the papers, it wasn't always easy to work out what I was doing wrong when an algorithm just wouldn't train or threw runtime errors. As such, I used the following repositories as references.