IA-SSD
about gpu memory
Hi, I am also wondering about the memory numbers in your Table 6. I set batch_size=1 during inference on a V100, but I measured 1800+ MB. How can I reproduce the 102 MB (or ~120 MB) result?
The memory footprint here is an average: our model can process 100 frames in parallel on an RTX 2080 Ti (11 GB total, excluding the initial ~800 MB occupied by PyTorch 1.1). It can thus be roughly calculated as (11000 - 800) / 100 = 102 MB per frame.
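The amortization described above can be sketched as a one-line calculation. This is just the arithmetic from the reply; the function name and the exact figures (11000 MB total, 800 MB PyTorch baseline, 100 frames) are taken from the thread, not from the IA-SSD codebase:

```python
def amortized_memory_mb(total_mb: float, overhead_mb: float, num_frames: int) -> float:
    """Average GPU memory per frame when num_frames are batched together,
    after subtracting the fixed framework overhead."""
    return (total_mb - overhead_mb) / num_frames

# RTX 2080 Ti: ~11000 MB total, ~800 MB PyTorch 1.1 baseline, 100 frames in parallel
print(amortized_memory_mb(11000, 800, 100))  # → 102.0
```

Note this is an amortized figure: a single-frame run (batch_size=1) still pays the full framework overhead plus per-model buffers, which is why a measurement of 1800+ MB at batch size 1 is not inconsistent with the reported 102 MB average.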
Hello, thanks for your work. I would like to ask how to process 100 frames in parallel. Does that mean batch_size can be set to 100 during inference?