
Tutorials for FinRL and FinRL-Meta. Please star.

Practical Deep Reinforcement Learning Approach for Stock Trading

Please check the FinRL library

This project has now been merged into the FinRL library.


Python 3.6+ environment

Step 1: Install OpenAI Baselines System Packages (see the OpenAI instructions)


Ubuntu

sudo apt-get update && sudo apt-get install cmake libopenmpi-dev python3-dev zlib1g-dev

Mac OS X

Installation of system packages on Mac requires Homebrew. With Homebrew installed, run the following:

brew install cmake openmpi

Step 2: Create and Activate Virtual Environment

Clone the repository to folder /DQN-DDPG_Stock_Trading:

git clone
cd DQN-DDPG_Stock_Trading

Under folder /DQN-DDPG_Stock_Trading, create a virtual environment

pip install virtualenv

Virtualenvs are essentially folders that contain copies of the Python executable and all installed Python packages. Create a virtualenv called venv, which will live at /DQN-DDPG_Stock_Trading/venv:

virtualenv -p python3 venv

To activate a virtualenv:

source venv/bin/activate
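If you want to confirm that the virtual environment is really active before installing packages, a quick check from Python (this check is a general virtualenv idiom, not something specific to this project):

```python
import sys

# Inside an activated virtualenv, sys.prefix points at the venv directory,
# while the base interpreter's prefix stays unchanged. Old virtualenv
# versions set sys.real_prefix; newer ones rely on sys.base_prefix.
in_venv = hasattr(sys, "real_prefix") or sys.prefix != getattr(sys, "base_prefix", sys.prefix)
print("virtualenv active:", in_venv)
```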

Step 3: Install the OpenAI Gym environment under this virtual environment: venv

Tensorflow versions

The master branch supports TensorFlow versions 1.4 through 1.14. For TensorFlow 2.0 support, please use the tf2 branch. Refer to the TensorFlow installation guide for more details.

  • Install gym and tensorflow packages:
    pip install gym
    pip install gym[atari] 
    pip install tensorflow==1.14
  • Other packages that might be missing:
    pip install filelock
    pip install matplotlib
    pip install pandas

Step 4: Download and Install Official Baseline Package

  • Clone the baseline repository to folder DQN-DDPG_Stock_Trading/baselines:

    git clone
    cd baselines
  • Install baselines package

    pip install -e .

Step 5: Testing the installation

Run all unit tests in baselines:

pip install pytest
pytest

A result like '94 passed, 49 skipped, 72 warnings in 355.29s' is expected. Check the OpenAI baselines issues page or Stack Overflow if fixes for failed tests are needed.

Step 6: Test OpenAI Atari Pong game

If this works, the setup is ready for the stock trading application.

python -m baselines.run --alg=ppo2 --env=PongNoFrameskip-v4 --num_timesteps=1e4 --load_path=~/models/pong_20M_ppo2 --play

A mean reward per episode around 20 is expected.

Step 7: Register Stock Trading Environment under gym

Register the RLStock-v0 environment in folder /DQN-DDPG_Stock_Trading/venv. From

copy the following:

into the venv gym environment:

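For orientation, a Gym registration entry of this kind typically looks like the snippet below, pasted into the venv Gym installation's environment registry. The entry_point module and class name here are illustrative placeholders, not necessarily the project's actual ones:

```python
from gym.envs.registration import register

register(
    id='RLStock-v0',
    # hypothetical module:class path; use the one from the repository
    entry_point='gym.envs.rlstock:StockEnv',
)
```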
Step 8: Build Stock Trading Environment under gym

  • Copy the folder

    into the venv gym environment folder:

  • Open

    and change the import data path in these two files (cd into the rlstock folder and run pwd to check the folder path).
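The copied environment files follow the standard Gym interface (reset/step) and load price data from a hard-coded path, which is why the path edit above is needed. The sketch below shows the general pattern, with the data path resolved relative to the source file so it works on any machine. All names here (class, CSV file, starting cash) are illustrative, not the project's actual ones:

```python
import os

# Resolve the data file relative to this source file instead of a
# hard-coded absolute folder (avoids editing the path on every machine).
DATA_DIR = os.path.dirname(os.path.abspath(__file__))
CSV_PATH = os.path.join(DATA_DIR, 'data', 'prices.csv')  # placeholder file name

class StockEnvSketch:
    """Minimal gym-style trading environment skeleton (illustrative only)."""

    def __init__(self, prices, start_cash=10000.0):
        self.prices = prices          # one price per time step
        self.start_cash = start_cash
        self.t, self.cash, self.shares = 0, start_cash, 0

    def reset(self):
        self.t, self.cash, self.shares = 0, self.start_cash, 0
        return self._obs()

    def step(self, action):
        # action: +1 buy one share, -1 sell one share, 0 hold
        price = self.prices[self.t]
        if action > 0 and self.cash >= price:
            self.cash -= price
            self.shares += 1
        elif action < 0 and self.shares > 0:
            self.cash += price
            self.shares -= 1
        self.t += 1
        done = self.t >= len(self.prices) - 1
        # reward: change in total portfolio value since the start
        reward = self.cash + self.shares * self.prices[self.t] - self.start_cash
        return self._obs(), reward, done, {}

    def _obs(self):
        return (self.prices[self.t], self.cash, self.shares)
```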

Step 9: Training and Testing the Model


Go to folder /DQN-DDPG_Stock_Trading

Activate the virtual environment:

source venv/bin/activate

Go to the baselines folder:

cd baselines

To train the model, run:

python -m baselines.run --alg=ddpg --env=RLStock-v0 --network=mlp --num_timesteps=1e4

To see the testing/trading result, run:

python -m baselines.run --alg=ddpg --env=RLStock-v0 --network=mlp --num_timesteps=2e4 --play

The result images are under folder /DQN-DDPG_Stock_Trading/baselines.

(You can tune the hyperparameter num_timesteps to train the model better; note that if this number is too high you will face overfitting, and if it is too low you will face underfitting.)
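One simple way to explore this trade-off is to sweep a few num_timesteps values and compare the resulting trading plots. The sketch below only prints each command (remove the echo to actually run them); the particular step values are arbitrary examples:

```shell
# Sweep a few num_timesteps values to probe under- vs overfitting.
# The loop only prints each command; remove 'echo' to actually train.
for steps in 5e3 1e4 2e4 5e4; do
  echo "python -m baselines.run --alg=ddpg --env=RLStock-v0 --network=mlp --num_timesteps=$steps"
done
```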

Compare with our result:

Other Commands You May Need:

pip3 install opencv-python
pip3 install lockfile
pip3 install -U numpy
pip3 install mujoco-py==0.5.7

Please cite the following paper:

Xiong, Z., Liu, X.-Y., Zhong, S., Yang, H. and Walid, A., 2018. Practical deep reinforcement learning approach for stock trading. NeurIPS 2018 AI in Finance Workshop.