Popular-RL-Algorithms

PyTorch implementation of Soft Actor-Critic (SAC), Twin Delayed DDPG (TD3), Actor-Critic (AC/A2C), Proximal Policy Optimization (PPO), QT-Opt, PointNet, etc.

Popular Model-free Reinforcement Learning Algorithms

PyTorch and TensorFlow 2.0 implementations of state-of-the-art model-free reinforcement learning algorithms, on both OpenAI Gym environments and a self-implemented Reacher environment.

Algorithms include:

  • Actor-Critic (AC/A2C);
  • Soft Actor-Critic (SAC);
  • Deep Deterministic Policy Gradient (DDPG);
  • Twin Delayed DDPG (TD3);
  • Proximal Policy Optimization (PPO);
  • QT-Opt (including Cross-entropy (CE) Method);
  • PointNet;
  • Transporter;
  • Recurrent Policy Gradient;
  • Soft Decision Tree;
  • Probabilistic Mixture-of-Experts;
  • QMIX;
  • etc.

Please note that this repo is more of a personal collection of algorithms I implemented and tested during my research and study, rather than an official open-source library/package. However, I think sharing it could be helpful for others, and I welcome discussions about the implementations. I did not spend much time cleaning or structuring the code, and you may notice several versions of the implementation for each algorithm; I intentionally keep all of them here so that you can refer to and compare them. Also, this repo contains only the PyTorch implementations.

For official libraries of RL algorithms, I provide the following two, implemented with TensorFlow 2.0 + TensorLayer 2.0:

  • RL Tutorial (Status: Released) contains RL algorithms implementation as tutorials with simple structures.

  • RLzoo (Status: Released) is a baseline implementation with a high-level API supporting a variety of popular environments, with a more hierarchical structure for simple usage.

For multi-agent RL, a new repository is built (PyTorch):

  • MARS (Status: WIP) is a library for multi-agent RL on games, like PettingZoo Atari, SlimeVolleyBall, etc.

Since TensorFlow 2.0 has adopted dynamic graph construction instead of the static graph, it becomes trivial to transfer the RL code between TensorFlow and PyTorch.

Contents:

  • Multiple versions of Soft Actor-Critic (SAC) are implemented.

    SAC Version 1:

    sac.py: using state-value function.

    paper: https://arxiv.org/pdf/1801.01290.pdf

    SAC Version 2:

    sac_v2.py: using the target Q-value function instead of the state-value function (a short sketch of the resulting soft Q target is given at the end of this item).

    paper: https://arxiv.org/pdf/1812.05905.pdf

    SAC Discrete:

    sac_discrete.py: for discrete action space.

    paper (the author is actually one of my classmates at IC): https://arxiv.org/abs/1910.07207

    SAC Discrete PER:

    sac_discrete_per.py: for discrete action space, and with prioritized experience replay (PER).
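
    For reference, below is a minimal sketch of the entropy-regularized target used in the "version 2" variant above (no state-value network); the names policy_net, target_q_net1/2 and the evaluate() method are illustrative placeholders, not the exact code in sac_v2.py.

        import torch

        def soft_q_target(reward, next_state, done, policy_net,
                          target_q_net1, target_q_net2, alpha=0.2, gamma=0.99):
            # Entropy-regularized Bellman backup of SAC "version 2":
            # y = r + gamma * (1 - done) * (min_i Q_target_i(s', a') - alpha * log pi(a'|s'))
            with torch.no_grad():
                # sample the next action and its log-probability from the current policy
                next_action, next_log_prob = policy_net.evaluate(next_state)
                # clipped double-Q: element-wise minimum of the two target critics
                min_q = torch.min(target_q_net1(next_state, next_action),
                                  target_q_net2(next_state, next_action))
                return reward + gamma * (1.0 - done) * (min_q - alpha * next_log_prob)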

  • Deep Deterministic Policy Gradient (DDPG):

    ddpg.py: implementation of DDPG.

  • Twin Delayed DDPG (TD3):

    td3.py: implementation of TD3.

    paper: https://arxiv.org/pdf/1802.09477.pdf

  • Proximal Policy Optimization (PPO):

    For continuous environments, two versions are implemented:

    Version 1: ppo_continuous.py and ppo_continuous_multiprocess.py

    Version 2: ppo_continuous2.py and ppo_continuous_multiprocess2.py

    For discrete environments:

    ppo_gae_discrete.py: with Generalized Advantage Estimation (GAE)

  • Actor-Critic (AC) / A2C:

    ac.py: extensible AC/A2C, easy to change into DDPG, etc.

    A very extensible version of vanilla AC/A2C, supporting all continuous/discrete and deterministic/stochastic cases.

  • DQN:

    dqn.py: a simple DQN.

  • QT-Opt:

    Two versions are implemented here.

  • PointNet for landmark generation from images with unsupervised learning is implemented here. This method is also used as a SOTA algorithm for image-based reinforcement learning, called Transporter.

    original paper: Unsupervised Learning of Object Landmarks through Conditional Image Generation

    paper for RL: Unsupervised Learning of Object Keypoints for Perception and Control

  • Recurrent Policy Gradient:

    rdpg.py: DDPG with LSTM policy.

    td3_lstm.py: TD3 with LSTM policy.

    sac_v2_lstm.py: SAC with LSTM policy.

    sac_v2_gru.py: SAC with GRU policy.

    References:

    Memory-based control with recurrent neural networks

    Sim-to-Real Transfer of Robotic Control with Dynamics Randomization

  • Soft Decision Tree as function approximator for PPO:

    sdt_ppo_gae_discrete.py: replace the policy network in PPO with a Soft Decision Tree, to achieve explainable RL.

    paper: CDT: Cascading Decision Trees for Explainable Reinforcement Learning

  • Probabilistic Mixture-of-Experts (PMOE):

    pmoe.py: uses a differentiable multi-modal Gaussian distribution to replace the standard unimodal Gaussian distribution for policy representation.

    paper: Probabilistic Mixture-of-Experts for Efficient Deep Reinforcement Learning

  • QMIX:

    qmix.py: a multi-agent RL algorithm for fully cooperative settings; the demo environment uses PettingZoo.

    paper: http://proceedings.mlr.press/v80/rashid18a.html

  • Phasic Policy Gradient (PPG):

    todo

    paper: Phasic Policy Gradient

  • Maximum a Posteriori Policy Optimisation (MPO):

    todo

    paper: Maximum a Posteriori Policy Optimisation

  • Advantage-Weighted Regression (AWR):

    todo

    paper: Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning

Usage:

python ***.py --train

python ***.py --test
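
The exact flags may differ slightly between scripts; the following is a minimal sketch (my own illustration, not copied from any particular script in this repo) of the kind of --train/--test handling implied above:

    import argparse

    parser = argparse.ArgumentParser(description='Train or test an RL agent.')
    parser.add_argument('--train', action='store_true', help='train the agent and save the model')
    parser.add_argument('--test', action='store_true', help='load a saved model and evaluate it')
    args = parser.parse_args()

    if args.train:
        pass  # run the training loop here
    elif args.test:
        pass  # run the evaluation loop here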

Troubleshooting:

If you meet the problem "NotImplementedError", it may be due to a wrong gym version. The newest gym==0.14 won't work. Install gym==0.7 or gym==0.10 with pip install -r requirements.txt.

Undervalued tricks:

As we all know, various tricks in empirical RL algorithm implementations support the performance in practice, including hyper-parameters, normalization, network architecture, or even the hidden activation function, etc. Here I summarize some that I encountered with the programs in this repo:

  • Environment specific:

    • For the Pendulum-v0 environment in Gym, a reward pre-processing as (r+8)/8 usually improves the learning efficiency (a minimal example is sketched after this list). Also, this environment needs the maximum episode length to be at least 150 to learn well; too-short episodes make it hard to learn.
    • The MountainCar-v0 environment in Gym has a very sparse reward (only when reaching the flag), so the learning curves will generally be noisy; some environment-specific processing (e.g., reward shaping) may also be needed here.
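
    A minimal sketch of the Pendulum-v0 reward pre-processing mentioned above, assuming the standard (old) gym interaction loop; the replay-buffer line is only indicative:

        import gym

        env = gym.make('Pendulum-v0')
        state = env.reset()
        for step in range(150):                    # use episodes of at least ~150 steps
            action = env.action_space.sample()     # replace with the policy's action
            next_state, reward, done, _ = env.step(action)
            scaled_reward = (reward + 8.0) / 8.0   # rewards in [-16.27, 0] -> roughly [-1, 1]
            # replay_buffer.push(state, action, scaled_reward, next_state, done)
            state = next_state
            if done:
                break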
  • Normalization:

    • Reward normalization or advantage normalization within a batch can sometimes greatly improve performance (learning efficiency, stability), although theoretically on-policy algorithms like PPO should not apply data normalization during training due to distribution shift. Looking at this more closely, we should treat two cases differently: (1) normalizing direct input data like observations, actions, rewards, etc.; (2) normalizing estimated values (state value, state-action value, advantage, etc.). For (1), a more reasonable way is to keep a moving average of the previous mean and standard deviation, to approximate normalization over the full dataset during RL agent learning (which is not possible, since in RL the data comes from the interaction of agents and environments); a sketch of such a running normalizer is given below. For (2), we can simply normalize the value estimates per batch (rather than keeping a historical average), since we do not want the estimated values to have distribution shift, so we treat them like a static distribution.
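
    A minimal sketch of point (1) above, a running mean/standard-deviation normalizer for raw inputs such as observations or rewards; the class and its interface are my own illustration, not code from this repo. For point (2), a per-batch normalization is simply (x - x.mean()) / (x.std() + 1e-8) on the value/advantage estimates.

        import numpy as np

        class RunningNormalizer:
            # keeps a moving estimate of mean/variance and normalizes new data with it
            def __init__(self, shape=(), eps=1e-8):
                self.mean = np.zeros(shape)
                self.var = np.ones(shape)
                self.count = eps                   # avoids division by zero before any update

            def update(self, batch):
                batch = np.asarray(batch, dtype=np.float64)
                b_mean, b_var, b_count = batch.mean(axis=0), batch.var(axis=0), batch.shape[0]
                delta = b_mean - self.mean
                total = self.count + b_count
                # parallel (Chan et al.) update of the running mean and variance
                self.mean = self.mean + delta * b_count / total
                m_a = self.var * self.count
                m_b = b_var * b_count
                self.var = (m_a + m_b + delta ** 2 * self.count * b_count / total) / total
                self.count = total

            def normalize(self, x):
                return (x - self.mean) / np.sqrt(self.var + 1e-8)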
  • Multiprocessing:

    • Is the multiprocessing update based on torch.multiprocessing the right/safe way to parallelize the code? The official instruction (the Hogwild example) for using torch.multiprocessing applies no explicit locks, which means it can potentially be unsafe when multiple processes generate gradients and update the shared model at the same time. See more discussions here and some tests and answers. In general, the drawback of unsafe updates may be outweighed by the speed-up from multiprocessing (and RL training itself has huge variance and noise); a simplified sketch of this pattern is given after this list.

    • Although I provide multiprocessing versions of several algorithms (SAC, PPO, etc.), for small-scale environments in Gym this is usually unnecessary or even inefficient. A vectorized environment wrapper for parallel environment sampling may be a more proper solution for learning these environments, since the bottleneck in learning efficiency mainly lies in the interaction with environments rather than in the model update (back-propagation) itself.
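
    For reference, a simplified sketch of the Hogwild-style pattern discussed above (shared model, per-process optimizers, no explicit locks), following the official PyTorch multiprocessing example; MyModel and train() are placeholders, not code from this repo:

        import torch
        import torch.multiprocessing as mp
        import torch.nn as nn

        class MyModel(nn.Module):
            def __init__(self):
                super().__init__()
                self.fc = nn.Linear(4, 2)

            def forward(self, x):
                return self.fc(x)

        def train(shared_model):
            # each worker builds its own optimizer over the *shared* parameters
            optimizer = torch.optim.SGD(shared_model.parameters(), lr=1e-3)
            for _ in range(100):
                x = torch.randn(8, 4)
                loss = shared_model(x).pow(2).mean()   # stand-in for an RL loss
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()                       # lock-free (potentially racy) update

        if __name__ == '__main__':
            model = MyModel()
            model.share_memory()                       # place parameters in shared memory
            processes = [mp.Process(target=train, args=(model,)) for _ in range(4)]
            for p in processes:
                p.start()
            for p in processes:
                p.join()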

For more discussions about implementation tricks, see this chapter in our book.

Performance:

  • SAC for gym Pendulum-v0:

SAC with automatically updating variable alpha for entropy:

SAC without automatically updating variable alpha for entropy:

It shows that the automatic entropy update helps the agent learn faster.

  • TD3 for gym Pendulum-v0:

TD3 with deterministic policy:

TD3 with non-deterministic/stochastic policy:

It seems that TD3 with a deterministic policy works a little better, but the two are basically similar.

  • AC for gym CartPole-v0:

However, vanilla AC/A2C cannot handle the continuous case like gym Pendulum-v0 well.

  • PPO for gym LunarLanderContinuous-v2:

Use ppo_continuous_multiprocess2.py.

Citation:

To cite this repository:

@misc{rlalgorithms,
  author = {Zihan Ding},
  title = {Popular-RL-Algorithms},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/quantumiracle/Popular-RL-Algorithms}},
}

Other Resources:

Deep Reinforcement Learning: Fundamentals, Research and Applications (Springer Nature, 2020)

is the book I edited with Dr. Hao Dong and Dr. Shanghang Zhang, which provides a wide coverage of topics in deep reinforcement learning. For details, see the website and the Springer webpage. To cite the book:

@book{deepRL-2020,
 title={Deep Reinforcement Learning: Fundamentals, Research, and Applications},
 editor={Hao Dong and Zihan Ding and Shanghang Zhang},
 author={Hao Dong and Zihan Ding and Shanghang Zhang and Hang Yuan and Hongming Zhang and Jingqing Zhang and Yanhua Huang and Tianyang Yu and Huaqing Zhang and Ruitong Huang},
 publisher={Springer Nature},
 note={\url{http://www.deepreinforcementlearningbook.org}},
 year={2020}
}