Add the MTMD model on Alpha360
Add the MTMD model to the main branch
Description
Recently, machine learning methods have shown promise for stock trend forecasting. However, the volatile and dynamic nature of the stock market makes it difficult to apply machine learning techniques directly. Previous methods usually use the temporal information of historical stock price patterns to predict future stock trends, but the multi-scale temporal dependencies of financial data and stable trading opportunities remain difficult to capture. The main problem can be ascribed to the challenge of recognizing the patterns of real profit signals in noisy information. In this paper, we propose a framework called Multiscale Temporal Memory Learning and Efficient Debiasing (MTMD). Specifically, through self-similarity, we design a learnable embedding with external attention as a memory block, in order to reduce noise and enhance the temporal consistency of the model. This framework not only aggregates comprehensive local information at each timestamp, but also concentrates globally important historical patterns across the whole time stream. Meanwhile, we also design a graph network based on global and local information to adaptively fuse the heterogeneous multi-scale information. Extensive ablation studies and experiments demonstrate that MTMD outperforms state-of-the-art approaches by a significant margin on the benchmark datasets.
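For reviewers skimming the diff, here is a minimal PyTorch sketch of the memory-block idea described above: a learnable external memory that per-timestamp stock embeddings attend to, so that recurring profit patterns are reinforced and noise is averaged out. The class name, slot count, and tensor shapes are illustrative assumptions, not the exact MTMD implementation in this PR.

```python
import torch
import torch.nn as nn


class MemoryBlock(nn.Module):
    """Sketch of a learnable memory with external attention (illustrative)."""

    def __init__(self, hidden_dim: int, num_slots: int = 64):
        super().__init__()
        # Learnable external memory shared across all stocks and timestamps.
        self.memory = nn.Parameter(torch.randn(num_slots, hidden_dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_stocks, hidden_dim) embeddings from a temporal encoder.
        # Attention scores between each stock embedding and every memory slot.
        attn = torch.softmax(x @ self.memory.t() / x.size(-1) ** 0.5, dim=-1)
        # Read from memory: a denoised, globally consistent representation.
        read = attn @ self.memory
        # Residual fusion keeps the local (per-timestamp) information.
        return x + read


# Example: 300 stocks with 128-dim embeddings.
block = MemoryBlock(hidden_dim=128)
out = block(torch.randn(300, 128))  # -> (300, 128)
```

In the actual model, this memory read-out is further combined with the multi-scale temporal features and the graph-based fusion of global and local information described above.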
Motivation and Context
No, but all MTMD-related work and issue descriptions can be found at https://github.com/MingjieWang0606/MTMD-Public/tree/main.
How Has This Been Tested?
- [ ] Pass the test by running: `pytest qlib/tests/test_all_pipeline.py` under the upper directory of `qlib`.
- [x] If you are adding a new feature, test on your own test scripts (see the sketch below).
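For the "own test scripts" item above, a minimal sketch of how the new model can be exercised against the Alpha360 handler, assuming a locally prepared CN data bundle and the hypothetical class path `qlib.contrib.model.pytorch_mtmd.MTMD`; the exact module path and hyperparameters should be taken from the benchmark config added in this PR.

```python
import qlib
from qlib.config import REG_CN
from qlib.utils import init_instance_by_config

# Assumes the default CN daily data bundle has been downloaded locally.
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region=REG_CN)

dataset = init_instance_by_config({
    "class": "DatasetH",
    "module_path": "qlib.data.dataset",
    "kwargs": {
        "handler": {
            "class": "Alpha360",
            "module_path": "qlib.contrib.data.handler",
            "kwargs": {
                "start_time": "2008-01-01",
                "end_time": "2020-08-01",
                "fit_start_time": "2008-01-01",
                "fit_end_time": "2014-12-31",
                "instruments": "csi300",
            },
        },
        "segments": {
            "train": ("2008-01-01", "2014-12-31"),
            "valid": ("2015-01-01", "2016-12-31"),
            "test": ("2017-01-01", "2020-08-01"),
        },
    },
})

model = init_instance_by_config({
    "class": "MTMD",                                   # hypothetical class name
    "module_path": "qlib.contrib.model.pytorch_mtmd",  # hypothetical module path
    "kwargs": {"d_feat": 6, "hidden_size": 64},        # illustrative hyperparameters
})

model.fit(dataset)
pred = model.predict(dataset)  # predictions on the test segment
print(pred.head())
```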
Screenshots of Test Results (if appropriate):
- Pipeline test:
- Your own tests:
Types of changes
- [ ] Fix bugs
- [x] Add new feature
- [x] Update documentation
Hi @you-n-g ,
We have just pushed a significant update to our repository, featuring our latest state-of-the-art model, MTMD.
In addition to the MTMD model itself, we have updated our experiments to reflect the new model's capabilities. These experiments provide comprehensive insight into the model's performance and the improvements it brings.
We would greatly appreciate your expertise and time in reviewing these changes. Your feedback is invaluable to us, and we are looking forward to any suggestions or insights you might have.
Thank you for your attention and support.
Best regards, Mingjie Wang