
GPU memory of ZeroDepth

Open baibizhe opened this issue 2 years ago • 3 comments

Hello. I am trying to run inference with zerodepth_model = torch.hub.load("TRI-ML/vidar", "ZeroDepth", pretrained=True, trust_repo=True). However, it only works if I resize the input image to an extremely small size, for example 144x256. If the image size is 640x360, the GPU runs out of memory (OOM). I run all my experiments on an A100 40G. Is this normal?

Best regards, Bizhe

baibizhe · Sep 05 '23 14:09
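For reference, a minimal sketch of the setup described above (the tensor shapes, value ranges, and the [1, 3, 3] intrinsics layout are assumptions, not taken from the vidar documentation):

import torch

# Load ZeroDepth from torch hub exactly as in the report above.
zerodepth_model = torch.hub.load(
    "TRI-ML/vidar", "ZeroDepth", pretrained=True, trust_repo=True
)
zerodepth_model = zerodepth_model.cuda().eval()

# Placeholder inputs (assumed layout): an RGB tensor in [0, 1] at the small
# 144x256 resolution that reportedly fits in memory, and a pinhole camera
# matrix with made-up focal lengths and principal point.
rgb = torch.rand(1, 3, 144, 256, device="cuda")
intrinsics = torch.tensor([[[500.0,   0.0, 128.0],
                            [  0.0, 500.0,  72.0],
                            [  0.0,   0.0,   1.0]]], device="cuda")

depth_pred = zerodepth_model(rgb, intrinsics)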

I'm seeing this same issue

Have you found a solution?

mrussell9 · Sep 14 '23 15:09

> I'm seeing this same issue
>
> Have you found a solution?

Not yet

baibizhe · Sep 16 '23 12:09

I was able to run inference on a GPU with less VRAM than yours by wrapping the forward pass in torch.no_grad():

# no_grad keeps autograd from caching activations for a backward pass,
# which is the main GPU memory cost during the forward pass
with torch.no_grad():
    depth_pred = zerodepth_model(rgb, intrinsics)

mrussell9 · Sep 18 '23 19:09
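A note on why this helps: by default autograd stores every intermediate activation so a backward pass could be run later, and those activations dominate GPU memory for a large forward pass; torch.no_grad() lets them be freed immediately. On PyTorch 1.9 or newer, torch.inference_mode() is a slightly stricter drop-in alternative (sketch only, reusing the tensors from the example above):

# inference_mode additionally skips autograd's version-counter bookkeeping
with torch.inference_mode():
    depth_pred = zerodepth_model(rgb, intrinsics)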