HAT
How much memory is needed to run inference?
I get a GPU OOM error when running test.py. I currently have 16 GB. Is this not enough?
@KyriaAnnwyn What are your specific settings? GPU OOM may occur when the input size is too large, especially for HAT-L on SRx2.
I tried SRx2 and SRx4 on 512x512 images. Both led to GPU OOM. The CPU ran OK, but it took a lot of time.
@KyriaAnnwyn 512x512 is a really large input size, which may require about 20 GB of memory for HAT-L on SRx2. With limited GPU resources, you might consider testing the image in overlapping patches and then merging the results, as in the sketch below.
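For reference, here is a minimal sketch of that overlapping-patch approach. The `tiled_inference` and `_starts` names, the tile/overlap defaults, and the bare `model(patch)` interface are illustrative assumptions, not HAT's actual tile implementation; also note that window-attention models like HAT may additionally require each patch to be padded to a multiple of the window size.

```python
import torch

def _starts(length, tile, stride):
    # Tile start positions that cover [0, length), with a final flush tile
    # so the last patch ends exactly at the image border.
    last = max(length - tile, 0)
    starts = list(range(0, last + 1, stride))
    if starts[-1] != last:
        starts.append(last)
    return starts

@torch.no_grad()
def tiled_inference(model, img, scale=2, tile=256, overlap=32):
    """Run SR inference tile-by-tile and average the overlapping outputs.

    img: (1, C, H, W) tensor; `tile` and `overlap` are illustrative defaults.
    """
    _, c, h, w = img.shape
    stride = tile - overlap
    out = img.new_zeros(1, c, h * scale, w * scale)
    weight = torch.zeros_like(out)
    for top in _starts(h, tile, stride):
        for left in _starts(w, tile, stride):
            patch = img[:, :, top:top + tile, left:left + tile]
            sr = model(patch)  # (1, C, th*scale, tw*scale)
            ts, ls = top * scale, left * scale
            # Accumulate the SR patch and count how many tiles cover each pixel,
            # so overlapping regions are averaged at the end.
            out[:, :, ts:ts + sr.shape[2], ls:ls + sr.shape[3]] += sr
            weight[:, :, ts:ts + sr.shape[2], ls:ls + sr.shape[3]] += 1
    return out / weight
```

With a 512x512 input, tile=256, and overlap=32, only a 256x256 patch is ever on the GPU at once, which trades speed for a much smaller memory peak.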
I will test the memory requirements of the models and provide a solution for testing with limited GPU resources.
@chxy95 Thank you
A tile mode is now provided for testing with limited GPU memory. The settings can be found at https://github.com/XPixelGroup/HAT/blob/39eeb5c28741b05ed2f23f13ff9131efe7539fde/options/test/HAT_tile_example.yml#L7-L9
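If I recall the linked file correctly, the relevant block looks roughly like this; treat the key names and values as an approximation and check the linked lines for the exact settings:

```yaml
tile:  # use the tile mode for limited GPU memory when testing
  tile_size: 256  # smaller uses less memory but runs slower
  tile_pad: 32    # overlap between tiles to avoid seam artifacts
```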
Hello @chxy95, I've trained a super-resolution model with a scaling factor of 1, setting gt_size to 64, even though my dataset consists of (512, 512) images. I believe the DataLoader automatically crops these images to the specified gt_size of 64. My question concerns inference with hat/test.py: does the script run inference on individual (64, 64) segments of the larger (512, 512) images and then stitch these segments back together to reconstruct the full (512, 512) image? Any clarification on this would be greatly appreciated.