FinRL_Crypto
How to improve this model's training efficiency on GPU
I tried to run the CPCV part of this model on Google Colab Pro+ with an A100 GPU, but it only uses 1.7 GB out of 40 GB of GPU RAM, no matter how I change the batch size, the GPU worker_num, or the CPU thread_num. It seems to me that there is a bottleneck restraining GPU utilization and training speed. Can anyone help with this problem?
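For reference, one way to check where the time actually goes is to profile a few training iterations and see whether it is spent in CUDA kernels or in CPU-side work (environment stepping, replay-buffer prep). The sketch below is not from the repo; `train_one_step()` is a hypothetical stand-in for one agent update in the CPCV run:

```python
# Minimal diagnostic sketch, assuming a PyTorch-based training loop.
# `train_one_step()` is a placeholder for one DRL agent update.
import torch
from torch.profiler import profile, ProfilerActivity

def train_one_step():
    # placeholder: sample from the environment, update actor/critic networks, etc.
    ...

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(20):  # a handful of steps is enough to see the pattern
        train_one_step()

# If most self-time is in CPU ops rather than CUDA kernels, the bottleneck is
# the environment / data pipeline, and raising batch size or GPU workers won't help.
print(prof.key_averages().table(sort_by="self_cuda_time_total", row_limit=15))
```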