Recoder
RuntimeError: CUDA out of memory
Hi there, thank you very much for open-sourcing the work!
I wonder what devices you used for this work. I tried to run the training on a machine with 8 Tesla V100-SXM2-16GB GPUs, but it fails with the error above. Besides, I found that the code only utilizes 2 GPUs, although I did not specify that. I modified the device setting inside run.py, but it still uses only 2 GPUs.
Please kindly suggest. Thank you in advance!
Hi @pkuzqh , I've got another issue when running the code.

If you want to change the batch size, you need to change the number in the dict "args". If you want to use multiple GPUs, you need to modify "model = nn.DataParallel(model, device_ids=[0, 1])".
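To illustrate, here is a minimal sketch of using all visible GPUs instead of the hard-coded device_ids=[0, 1] (the nn.Linear model here is just a placeholder standing in for Recoder's actual model):

```python
import torch
import torch.nn as nn

# placeholder model; Recoder's real model is built in run.py
model = nn.Linear(8, 2)

n_gpus = torch.cuda.device_count()
if n_gpus > 1:
    # wrap with DataParallel over all visible GPUs,
    # rather than the hard-coded device_ids=[0, 1]
    model = nn.DataParallel(model, device_ids=list(range(n_gpus)))
    model = model.cuda()

x = torch.randn(16, 8)          # batch of 16 samples
if n_gpus > 0:
    x = x.cuda()

out = model(x)                  # DataParallel scatters the batch across GPUs
print(tuple(out.shape))         # (16, 2)
```

You can also restrict which GPUs are visible with the CUDA_VISIBLE_DEVICES environment variable instead of editing device_ids in the code.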
Hi @pkuzqh, thank you for the reply. The CUDA out of memory issue has been resolved. However, I now hit the new error above. Please kindly suggest, thanks.
How many GPUs do you use? And the batch size?
3. I indicated in train() that device_ids=[1,2,3], and the batch size is 16.
You need to change the number "4" in lines 103-106 to a multiple of 3, and the batch size also needs to be a multiple of 3.
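The reason, as I understand it, is that nn.DataParallel splits each batch with torch.chunk, which produces uneven shards when the batch size is not a multiple of the GPU count. A small sketch that mimics that chunking (chunk_sizes is a hypothetical helper, not part of the repo):

```python
import math

def chunk_sizes(batch_size, n_gpus):
    """Mimic how torch.chunk splits a batch across n_gpus devices:
    each chunk gets ceil(batch_size / n_gpus) samples until the
    batch is exhausted, so the last GPU can get a smaller shard."""
    step = math.ceil(batch_size / n_gpus)
    sizes = []
    remaining = batch_size
    while remaining > 0:
        sizes.append(min(step, remaining))
        remaining -= step
    return sizes

print(chunk_sizes(16, 3))  # uneven shards: [6, 6, 4]
print(chunk_sizes(18, 3))  # even shards:   [6, 6, 6]
```

With batch size 16 on 3 GPUs the shards are unequal, which breaks code that assumes every device sees the same per-GPU batch size; 18 (a multiple of 3) splits evenly.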
OK, thank you very much @pkuzqh! It can now run. However, I saw that in train() the number of epochs is 100000 (for epoch in range(100000):). Is that intended?
BTW, for inference, it looks like testDefect4j.py can only use 1 GPU? I have 4 GPUs, but only one was used, and it caused an OOM issue.
You can use nn.DataParallel to use multiple GPUs in testDefect4J.py.
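For inference, the wrapping looks the same as in training; a minimal sketch (again with a placeholder nn.Linear standing in for the model testDefect4J.py loads):

```python
import torch
import torch.nn as nn

# placeholder for the model that testDefect4J.py actually loads
model = nn.Linear(8, 2)
model.eval()  # inference mode: disables dropout/batchnorm updates

if torch.cuda.device_count() > 1:
    # spread each inference batch across all visible GPUs,
    # which also spreads the memory footprint and avoids single-GPU OOM
    model = nn.DataParallel(model).cuda()
elif torch.cuda.is_available():
    model = model.cuda()

batch = torch.randn(32, 8)
if torch.cuda.is_available():
    batch = batch.cuda()

with torch.no_grad():           # no gradients needed at test time
    preds = model(batch)
print(tuple(preds.shape))       # (32, 2)
```

Note that with DataParallel the effective per-GPU batch is batch_size / n_gpus, so you may also be able to raise the test batch size once all 4 GPUs are used.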