FinRL_Crypto
FinRL_Crypto: Addressing Backtest Overfitting of DRL Agents for Cryptocurrency Trading

For financial reinforcement learning (FinRL), this repository provides a way to address the dreaded overfitting trap and increase your chances of success in the wild world of cryptocurrency trading. Our approach was tested on 10 cryptocurrencies, including a market-crash period, and proved more profitable than the baseline methods. So don't just sit there, join us on our journey to the top of the crypto mountain!
Paper
Our paper
How to use
To make the results in the paper easy to reproduce, the code is kept as simple as possible. You start with the config_main.py file, where you set:
- The validation method: Walk-Forward, K-Fold Cross-Validation, or Combinatorial Purged Cross-Validation (CPCV).
- How many candles/data points you require for training and validation.
- Which tickers to download from Binance, and their minimum buy limits.
- Your technical indicators.
The config then automatically computes the exact start and end dates for training and validation based on your trade start and end dates.
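For orientation, here is a minimal sketch of the kind of settings involved. The variable names and values below are illustrative assumptions, not the actual contents of config_main.py; the point is how the training and validation windows can be derived backwards from the trade start date.

```python
# Hypothetical settings sketch; names are illustrative, not the repo's actual ones.
from datetime import datetime, timedelta

TICKERS = ["BTCUSDT", "ETHUSDT"]          # Binance pairs to download
CANDLE_MINUTES = 5                         # candle resolution in minutes
NUM_TRAIN_CANDLES = 20_000                 # data points for training
NUM_VAL_CANDLES = 5_000                    # data points for validation
TECHNICAL_INDICATORS = ["macd", "rsi", "cci", "dx"]

# The validation window ends where trading begins, and the training
# window precedes the validation window, so both start dates follow
# from the trade start date and the candle counts above.
TRADE_START = datetime(2022, 1, 1)
VAL_START = TRADE_START - timedelta(minutes=NUM_VAL_CANDLES * CANDLE_MINUTES)
TRAIN_START = VAL_START - timedelta(minutes=NUM_TRAIN_CANDLES * CANDLE_MINUTES)
```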
A short description of each folder:
data: Contains all your training/validation data in the main folder, and a subfolder containing trade_data after download using both 0_dl_trainval_data.py and 0_dl_trade_data.py (more below)
drl_agents: Contains the DRL framework ElegantRL, which implements a series of model-free DRL algorithms
plots_and_metrics: Dump folder for all analysis images and performance metrics produced
train: Holds all utility functions for DRL training
train_results: After running 1_optimize_cpcv.py, 1_optimize_kcv.py, or 1_optimize_wf.py, contains a folder with your trained DRL agents
Then, reproducing the results in the paper is simple: run the Python files in the order indicated by the number prefixed to each filename:
0_dl_trainval_data.py: Downloads the training and validation data according to config_main.py
0_dl_trade_data.py: Downloads the trade data according to config_main.py
1_optimize_cpcv.py: Optimizes hyperparameters with a Combinatorial Purged Cross-Validation scheme
1_optimize_kcv.py: Optimizes hyperparameters with a K-Fold Cross-Validation scheme
1_optimize_wf.py: Optimizes hyperparameters with a Walk-Forward validation scheme
2_validate.py: Shows insights about the training and validation process (select a results folder from train_results)
4_backtest.py: Backtests trained DRL agents (enter multiple results folders from train_results in a list)
5_pbo.py: Computes the Probability of Backtest Overfitting (PBO) for trained DRL agents (enter multiple results folders from train_results in a list)
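Of the three validation schemes, CPCV is the least standard. As a rough sketch (not the repo's actual implementation), it partitions the time series into N contiguous groups, holds out every combination of k groups as test data, and purges training samples adjacent to the test boundaries to reduce leakage from overlapping labels:

```python
# Illustrative CPCV splitter; parameters and names are assumptions for this sketch.
from itertools import combinations
import numpy as np

def cpcv_splits(n_samples, n_groups=6, n_test_groups=2, purge=2):
    """Yield (train_idx, test_idx) pairs for Combinatorial Purged CV.

    Each of the C(n_groups, n_test_groups) splits holds out a different
    combination of contiguous groups as test data; training indices within
    `purge` samples of a test-group boundary are dropped.
    """
    idx = np.arange(n_samples)
    groups = np.array_split(idx, n_groups)
    for test_combo in combinations(range(n_groups), n_test_groups):
        test_idx = np.concatenate([groups[g] for g in test_combo])
        # ban the test samples plus a purge window around each test group
        banned = set(test_idx.tolist())
        for g in test_combo:
            lo, hi = groups[g][0], groups[g][-1]
            banned.update(range(max(0, lo - purge), min(n_samples, hi + purge + 1)))
        train_idx = np.array([i for i in idx if i not in banned])
        yield train_idx, test_idx

splits = list(cpcv_splits(60))  # C(6, 2) = 15 combinatorial splits
```

Compared with plain K-fold, this yields many more train/test paths from the same data while keeping test blocks contiguous in time, which is what makes the backtest-overfitting estimate (PBO) possible.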
Simply run the scripts in the order above. Note that trained agents are auto-saved to the train_results folder; that is where you can find your trained DRL agents!
Citing FinRL_Crypto
@article{gort2022deep,
  title={Deep reinforcement learning for cryptocurrency trading: Practical approach to address backtest overfitting},
  author={Gort, Berend Jelmer Dirk and Liu, Xiao-Yang and Gao, Jiechao and Chen, Shuaiyu and Wang, Christina Dan},
  journal={AAAI Bridge on AI for Financial Services},
  year={2023}
}