pytorch-semseg
RuntimeError: CUDA error: out of memory
When I try to train with the fcn8s_pascal config, this error always appears, but if I train with the segnet_pascal config it works normally. Can anyone help me? I didn't change the .yml file provided by the author.
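FCN8s keeps large intermediate feature maps alive for its skip connections and transposed-convolution upsampling, so it typically needs noticeably more GPU memory than SegNet at the same input size. A common first step is to reduce the batch size and/or the input resolution in the training config. A sketch of the relevant fields is below; the exact key names (`img_rows`, `img_cols`, `batch_size`) are assumptions based on this repo's usual config layout, so check your own fcn8s_pascal.yml for the actual names:

```yaml
# Hypothetical excerpt of a reduced-memory fcn8s_pascal.yml.
# Key names assumed; verify against the config shipped with the repo.
model:
    arch: fcn8s
data:
    dataset: pascal
    img_rows: 256      # try lowering from the default resolution
    img_cols: 256
training:
    batch_size: 1      # smallest possible batch to fit in GPU memory
```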
Hi, have you solved the problem?
Others have hit the same error when training FCN on PyTorch. I found two workarounds. The first is to resize the input to 224x224, matching the original paper; training then works normally, but I still can't test any images because the CUDA error appears again at test time. The second solution is discussed here: https://github.com/zijundeng/pytorch-semantic-segmentation/issues/37. I am still trying to change my code accordingly.
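For the test-time case specifically, a frequent cause of out-of-memory errors in PyTorch is running inference without disabling autograd: the framework then keeps every intermediate activation alive for a backward pass that never happens. Wrapping evaluation in `torch.no_grad()` (or marking inputs `volatile` on very old PyTorch versions) usually fixes it. A minimal sketch, using a small stand-in model rather than the actual trained fcn8s:

```python
import torch
import torch.nn as nn

# Stand-in for the trained segmentation model (21 PASCAL VOC classes).
# In practice you would load the real fcn8s checkpoint here.
model = nn.Conv2d(3, 21, kernel_size=3, padding=1)
model.eval()  # switch off dropout/batchnorm training behavior

img = torch.randn(1, 3, 224, 224)  # dummy input image

# Without no_grad(), autograd retains all intermediate activations,
# which can exhaust GPU memory during evaluation. Inside no_grad()
# they are freed as soon as they are no longer needed.
with torch.no_grad():
    out = model(img)

print(out.shape)  # torch.Size([1, 21, 224, 224])
```

Running training forward passes normally (outside `no_grad()`) is unaffected; this only changes the memory profile of evaluation.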
Have you solved this problem?