Pytorch-RL-CPP

A Repository with C++ implementations of Reinforcement Learning Algorithms (Pytorch)

RlCpp is a reinforcement learning framework, written using the PyTorch C++ frontend.
RlCpp aims to be an extensible, reasonably optimized, production-ready framework for using reinforcement learning in projects where Python isn't viable. It should be ready to use in desktop applications on users' machines, with minimal setup required on their side.
The environment used is the C++ port of the Arcade Learning Environment (ALE).
Currently Supported Models
The deep reinforcement learning community has made several independent improvements to the DQN algorithm. This repository implements the latest extensions to DQN:
- Playing Atari with Deep Reinforcement Learning [arxiv]
- Deep Reinforcement Learning with Double Q-learning [arxiv]
- Dueling Network Architectures for Deep Reinforcement Learning [arxiv]
- Prioritized Experience Replay [arxiv]
- Noisy Networks for Exploration [arxiv]
- A Distributional Perspective on Reinforcement Learning [arxiv]
- Rainbow: Combining Improvements in Deep Reinforcement Learning [arxiv]
- Distributional Reinforcement Learning with Quantile Regression [arxiv]
- Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation [arxiv]
- Neural Episodic Control [arxiv]
Results for Pong using Double DQN

Environments (All Atari Environments)
- Breakout
- Pong
- Montezuma's Revenge (Current Research)
- Pitfall
- Gravitar
- CarRacing
Installing the dependencies
Arcade Learning Environment
Install the main dependencies:
$ sudo apt-get install libsdl1.2-dev libsdl-gfx1.2-dev libsdl-image1.2-dev cmake
Compilation:
$ mkdir build && cd build
$ cmake -DUSE_SDL=ON -DUSE_RLGLUE=OFF -DBUILD_EXAMPLES=ON ..
$ make -j 4
To install python module:
$ pip install .
or
$ pip install --user .
Getting the ALE to work on Visual Studio requires a bit of extra wrangling. You may wish to use IslandMan93's Visual Studio port of the ALE.
To ask questions and discuss, please join the ALE-users group.
Libtorch
Building
CMake is used for the build system.
Most dependencies are included as submodules (run git submodule update --init --recursive to get them).
Libtorch has to be installed separately.
cd Reinforcement_CPP
mkdir build && cd build
cmake ..
make -j4
Before running, make sure to add libtorch/lib to your PATH environment variable (on Linux, the dynamic loader uses LD_LIBRARY_PATH instead).
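On Linux, for example (the /opt/libtorch location is an assumption; use wherever you unpacked the libtorch distribution):

```shell
# Adjust /opt/libtorch to your own libtorch location.
export LD_LIBRARY_PATH=/opt/libtorch/lib:$LD_LIBRARY_PATH
```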
Changes to the CMake file
The CMakeLists.txt requires some changes for things to run smoothly:
- After building ALE, link against the resulting libale.so.
- Set the Torch directory after building libtorch. Refer to the current CMakeLists.txt and make the relevant changes.
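A minimal sketch of what those two changes might look like (all paths and the target name are assumptions; point them at your own ALE build and libtorch directory):

```cmake
# Assumed locations; adapt to your machine.
set(CMAKE_PREFIX_PATH "/opt/libtorch")   # lets find_package locate Torch
find_package(Torch REQUIRED)

add_executable(rlcpp main.cpp)
# Link the ALE shared library built earlier (libale.so) plus libtorch.
target_link_libraries(rlcpp "${TORCH_LIBRARIES}" "/opt/ale/build/libale.so")
set_property(TARGET rlcpp PROPERTY CXX_STANDARD 17)
```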
Future Plans
Planned additions:
- Benchmarking of runtime differences between C++ and Python.
- Python bindings for the Trainer module.
- More models and methods.
- Support for the MuJoCo environment.
Stay tuned!