Deep-Reinforcement-Learning-in-Trading

Refactor the agents to PyTorch, update TA-Gen.py to use TA-Lib, restructure main.py

Open · labrinyang opened this issue 1 year ago · 3 comments

Refactor the code in double_dqn.py, dqn.py, and duelling_dqn.py for PyTorch compatibility, integrate TA-Lib into TA-Gen.py (since stockstats is not working at the moment), and restructure main.py for enhanced clarity.
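For the TA-Gen.py change, a rough sketch of what the generated indicators compute. The function names `sma` and `rsi` here are illustrative stand-ins written in plain NumPy so the logic is visible; the actual refactor would call `talib.SMA` and `talib.RSI` directly (note that TA-Lib's RSI uses Wilder smoothing, so this simplified rolling-mean version will differ slightly):

```python
import numpy as np

def sma(close: np.ndarray, period: int) -> np.ndarray:
    """Simple moving average; roughly equivalent to talib.SMA(close, timeperiod=period).
    The leading (period - 1) entries are NaN, mirroring TA-Lib's lookback behaviour."""
    out = np.full(close.shape, np.nan)
    kernel = np.ones(period) / period
    out[period - 1:] = np.convolve(close, kernel, mode="valid")
    return out

def rsi(close: np.ndarray, period: int = 14) -> np.ndarray:
    """Relative Strength Index using a plain rolling mean of gains/losses.
    NOTE: talib.RSI uses Wilder (exponential) smoothing, so values differ slightly."""
    delta = np.diff(close)
    gain = np.clip(delta, 0, None)   # positive price changes
    loss = np.clip(-delta, 0, None)  # negative price changes, as positive magnitudes
    out = np.full(close.shape, np.nan)
    for i in range(period, len(close)):
        avg_gain = gain[i - period:i].mean()
        avg_loss = loss[i - period:i].mean()
        rs = avg_gain / avg_loss if avg_loss > 0 else np.inf
        out[i] = 100.0 - 100.0 / (1.0 + rs)
    return out
```

In the refactored TA-Gen.py these loops would simply become `talib.SMA(close, timeperiod=period)` and `talib.RSI(close, timeperiod=period)` on the price columns.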

labrinyang avatar Nov 18 '23 05:11 labrinyang

This repository has been a popular resource for beginners learning to use reinforcement learning in quantitative trading. However, the code hasn't been updated for a considerable time. I've refactored the agents into a PyTorch version to improve their portability and stability. Additionally, I've replaced the stockstats package used in TA-Gen.py with TA-Lib, as TA-Lib is more widely used and user-friendly. I've also restructured main.py to enhance its clarity. While I have thoroughly reviewed my code, it may still not fully meet the repository's standards. If any issues arise, please inform me, and I will address them promptly. Finally, I would like to express my gratitude for your code, which has introduced me to the world of reinforcement learning in quantitative trading.😁🎶
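The core difference between dqn.py and double_dqn.py is how the bootstrap target is built, and that part survives any framework port unchanged. A minimal sketch in NumPy (function names and array shapes are illustrative, not from the repository; in the PyTorch version these become `torch.max` / `torch.gather` calls):

```python
import numpy as np

def dqn_target(q_next_target, rewards, dones, gamma=0.99):
    """Vanilla DQN target: the *target* network both selects and
    evaluates the best next action (max over its own Q-values)."""
    return rewards + gamma * (1.0 - dones) * q_next_target.max(axis=1)

def double_dqn_target(q_next_online, q_next_target, rewards, dones, gamma=0.99):
    """Double DQN target: the *online* network selects the action,
    the *target* network evaluates it; this decoupling reduces the
    overestimation bias of vanilla DQN."""
    best_actions = q_next_online.argmax(axis=1)
    q_eval = q_next_target[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * q_eval
```

Duelling DQN then changes only the network architecture (separate value and advantage streams), not the target computation, which is why the three agent files share most of their training loop.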

labrinyang avatar Nov 18 '23 06:11 labrinyang

Thanks @2665477495 for doing this and I am happy that you found this useful. A few thoughts on the natural next steps for the project.

  1. OpenAI Gym is obsolete and the latest updates are going into its fork, Gymnasium (https://gymnasium.farama.org/). So I would port the environment to that framework, which would offer you more features.
  2. Instead of developing agents yourself, use a framework like SB3 (https://stable-baselines3.readthedocs.io). If you are inclined to develop the models for the sake of learning, it makes sense to build them from scratch. But a fair warning: after DDQN, the models can get quite complex. Look at the implementation of something like A2C or PPO and you will understand how daunting it can be. Therefore, sticking to something like SB3 would help you focus on framing the problem rather than the implementation nightmare of the models.
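On the Gymnasium port: the main API change from legacy Gym is that `reset` returns `(obs, info)` and `step` returns a 5-tuple with separate `terminated`/`truncated` flags. A plain-Python sketch of that shape (class name, action encoding, and random-walk prices are all made up for illustration; a real port would subclass `gymnasium.Env` and declare `action_space`/`observation_space`):

```python
import numpy as np

class TradingEnvSketch:
    """Skeleton of a trading env following the Gymnasium API shape:
    reset() -> (obs, info); step() -> (obs, reward, terminated, truncated, info).
    Prices are a synthetic random walk, purely for illustration."""

    def __init__(self, n_steps=100, seed=0):
        self.n_steps = n_steps
        self.rng = np.random.default_rng(seed)

    def reset(self, seed=None):
        if seed is not None:
            self.rng = np.random.default_rng(seed)
        self.prices = 100.0 + np.cumsum(self.rng.normal(0, 1, self.n_steps))
        self.t = 0
        self.position = 0  # -1 short, 0 flat, +1 long
        return self._obs(), {}  # Gymnasium: reset returns (obs, info)

    def step(self, action):  # action in {0: short, 1: flat, 2: long}
        self.position = action - 1
        reward = self.position * (self.prices[self.t + 1] - self.prices[self.t])
        self.t += 1
        terminated = self.t >= self.n_steps - 1  # ran out of price data
        truncated = False                        # no external time limit here
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        return np.array([self.prices[self.t], float(self.position)], dtype=np.float32)
```

Once the env subclasses `gymnasium.Env` with proper spaces, SB3's agents (which consume Gymnasium envs) can train on it directly, which ties the two suggestions above together.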

saeed349 avatar Nov 18 '23 07:11 saeed349

Thank you for your advice😋. I will consider building upon SB3 and focus on conceptualizing the idea rather than dealing with complex coding.🍻 Your guidance has helped me move away from rigid frameworks that require coding every aspect of the model from scratch. Additionally, I plan to adapt the code for Gymnasium.

labrinyang avatar Nov 18 '23 08:11 labrinyang