SparseBEV
CUDA out of memory
I run inference on a single A100 40G. At the beginning it works fine, but after a while a CUDA out of memory error occurs. Could you please give me some advice on how to keep the GPU memory usage from growing?
I also notice that the tasks-per-second rate grows during inference; can I cap it to reduce memory usage?
Sorry for the late reply. I have optimized the code, and you should now be able to evaluate the vov99 model on your GPU. By the way, you can further boost the speed by setting self.fp16_enabled = True in simple_test_online.
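For context, a minimal sketch of what that flag does, assuming the usual mmcv/mmdet convention: methods decorated with @auto_fp16 only cast their inputs to half precision when the module's fp16_enabled attribute is True, so flipping it inside simple_test_online turns on mixed-precision inference for that path. The class and method bodies below are illustrative stand-ins, not the actual SparseBEV source; only the fp16_enabled attribute and the simple_test_online method name come from the reply above.

```python
# Illustrative sketch only, assuming an mmcv/mmdet-style model.
import torch
from mmcv.runner import auto_fp16


class SparseBEVLike(torch.nn.Module):  # hypothetical stand-in for the detector
    def __init__(self):
        super().__init__()
        self.fp16_enabled = False  # checked by the @auto_fp16 decorator below

    @auto_fp16(apply_to=('img',))
    def extract_feat(self, img):
        # With fp16_enabled=True, `img` is cast to half precision here,
        # roughly halving activation memory and improving throughput.
        return img.mean(dim=1)

    def simple_test_online(self, img):
        self.fp16_enabled = True   # the setting suggested in the reply above
        with torch.no_grad():      # inference only; also avoids memory growth
            return self.extract_feat(img)
```

Running inference under torch.no_grad() (or torch.inference_mode()) is also a common way to keep GPU memory from creeping up across frames, since no autograd graph is retained between calls.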