BasicSR
out of memory on testing
I set the batch size to 4 during training and it works fine. However, when I try to test, I get CUDA "out of memory" errors, and I can't find any setting to reduce VRAM usage. Would modifying "num_feat" and "num_block" help?
You probably don't have enough video memory. I had the same problem when testing the EDSR network on the DIV2K dataset (I was using 10 GB of video memory). Both the Torch environment and the model itself take up a lot of video memory, and this can happen if you load too much data at once.
How to solve it?
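Not a BasicSR-specific fix, but what usually helps on the test side is running inference under torch.no_grad() and splitting large images into tiles so only one crop is on the GPU at a time. Below is a minimal sketch of that idea; the tiled_inference helper and its tile/overlap/scale parameters are my own illustration, not part of BasicSR's API.

```python
import torch

def tiled_inference(model, img, tile=128, overlap=16, scale=4):
    """Run a super-resolution model tile by tile to limit peak VRAM usage.

    img: (1, C, H, W) tensor on the same device as the model.
    Later tiles simply overwrite earlier ones in the overlap region.
    """
    _, c, h, w = img.shape
    out = img.new_zeros(1, c, h * scale, w * scale)
    stride = tile - overlap
    with torch.no_grad():  # no gradients needed at test time
        for y in range(0, h, stride):
            for x in range(0, w, stride):
                # clamp so the tile never runs past the image border
                y0 = min(y, max(h - tile, 0))
                x0 = min(x, max(w - tile, 0))
                patch = img[:, :, y0:y0 + tile, x0:x0 + tile]
                sr = model(patch)
                out[:, :, y0 * scale:(y0 + tile) * scale,
                    x0 * scale:(x0 + tile) * scale] = sr
    return out
```

Call it with model.eval() set and a smaller tile (e.g. 64) if memory is still tight. Also note that num_feat and num_block have to match the checkpoint you load, so shrinking them at test time only works with a model that was trained at that size; tiling is the usual workaround for test-time OOM.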