Synb0-DISCO
High memory usage in inference.
I've found that when running the Singularity container version of the DISCO pipeline, we needed 32 GB of memory on our Sun Grid Engine cluster for the pipeline to complete.
I made a sandboxed version of the Singularity container and added a cache clear on a hunch. This appears to have worked, but I am still double-checking.
```diff
 def inference(T1_path, b0_d_path, model, device):
+    torch.cuda.empty_cache()
     # Eval mode
     model.eval()
```
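For reference, `torch.cuda.empty_cache()` only releases unused blocks held by the CUDA caching allocator; it does nothing for host (CPU) memory. A minimal sketch of a guarded version (`clear_gpu_cache` is a hypothetical helper, not part of the pipeline):

```python
import torch

def clear_gpu_cache(device: torch.device) -> None:
    # empty_cache() frees cached CUDA allocator blocks back to the driver;
    # it has no effect on CPU memory, so only call it for CUDA devices.
    if device.type == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()

# Safe no-op when the pipeline runs on CPU:
clear_gpu_cache(torch.device("cpu"))
```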
UPDATE:
The cache clearing didn't help. The call is probably wrong anyway, since the device is not CUDA.

I am now attempting another idea: explicitly using the float16 datatype rather than what we believe is the default, float32.
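Switching from float32 to float16 halves the per-element memory footprint, which is the motivation here. A quick NumPy sketch (the volume shape is illustrative, not the pipeline's actual dimensions):

```python
import numpy as np

# Hypothetical image volume, just to show the size difference.
vol32 = np.zeros((256, 256, 150), dtype=np.float32)
vol16 = vol32.astype(np.float16)

print(vol32.nbytes)  # 4 bytes per voxel
print(vol16.nbytes)  # 2 bytes per voxel: exactly half
```

In PyTorch this would typically mean `model.half()` plus casting the input tensors, though note that float16 inference on CPU can be slow and is less numerically stable than float32.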
I am closing the older bug reports, as these were missed. We are now tracking reports across the organization more effectively. Please re-open if this continues to be a blocker.