Synb0-DISCO

High memory usage in inference.

Open toomanycats opened this issue 6 months ago • 1 comment

I've found that when running the Singularity container version of the DISCO pipeline, we needed to request 32 GB of memory on our Sun Grid Engine cluster for the pipeline to run.

I made a sandboxed version of the Singularity container and added a cache clear on a hunch. This appears to have worked; still double-checking.

 def inference(T1_path, b0_d_path, model, device):
+    torch.cuda.empty_cache()
     # Eval mode
     model.eval()

toomanycats · Jan 03 '24 19:01

UPDATE:

The cache clearing didn't help. Plus, the call is probably wrong since the device is not CUDA. Attempting another idea: explicitly use the float16 datatype rather than what we think is the default, float32.
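
A minimal sketch of that half-precision idea, assuming the model and inputs can simply be cast down; the name inference_fp16 and the generic inputs argument are placeholders, not the pipeline's actual signature:

import torch

def inference_fp16(model, inputs, device):
    # Hypothetical half-precision pass: float16 weights and activations
    # use roughly half the memory of the default float32.
    model = model.to(device).half()      # cast weights to float16
    model.eval()
    with torch.no_grad():                # skip autograd bookkeeping during inference
        out = model(inputs.to(device).half())
    return out.float()                   # cast back to float32 before saving

One caveat: float16 operator coverage on CPU is limited in PyTorch, so a full cast mostly pays off on a CUDA device; torch.autocast is a gentler alternative if casting everything to half causes numerical problems.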

toomanycats · Jan 08 '24 18:01