How to use multiple GPUs to train the model
When I was training the GATs model on all the Chinese stock market data (by setting instrument = 'all'), I got the error 'RuntimeError: CUDA out of memory'. I actually have two GPUs, but only one was being used during training. How can I use multiple GPUs for training?
Thanks for the help in advance.
Hello, I am facing the same problem. Have you solved it? :)
Qlib provides a framework for quant research, but it does not manage computation resources. You can integrate https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html (or something similar) into https://github.com/microsoft/qlib/blob/main/qlib/contrib/model/pytorch_gats.py to fully leverage your hardware.