CCNet
Deadlock at loss.backward()
Hello, thank you for sharing your impressive work! However, I ran into an issue when running your code. A deadlock always occurs at loss.backward() in the second iteration (batch index 1). I think this is a multiprocessing issue, isn't it? My torch version is 0.4.1 and my Python version is 3.6.1. I hope this issue can be resolved soon!
The issue may occur when the machine runs out of GPU memory; in multi-GPU training this can sometimes surface as a hang rather than an explicit out-of-memory error. You can reduce the input size and try again.
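As a rough illustration of why reducing the input size helps (this is a back-of-the-envelope sketch, not code from the repo): activation memory in a fully convolutional network scales roughly with the H × W of the input crop, so shrinking the crop shrinks peak GPU memory proportionally.

```python
# Rough estimate (assumption: activation memory dominates and scales
# linearly with H * W of the input crop).
def activation_mem_ratio(old_hw, new_hw):
    """Ratio of approximate activation memory after changing the input crop size."""
    (oh, ow), (nh, nw) = old_hw, new_hw
    return (nh * nw) / (oh * ow)

# Halving each side of a 769x769 crop (the crop size used for Cityscapes
# in the CCNet paper) cuts activation memory to roughly a quarter.
print(activation_mem_ratio((769, 769), (385, 385)))
```

So if training hangs at loss.backward(), trying a smaller crop (or a smaller batch size) is a quick way to test whether GPU memory is the culprit.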