
CUDA out of memory

Open 731894915 opened this issue 3 years ago • 5 comments

Hi, first of all, thanks for releasing your CUDA operator for re-ranking. However, I ran into memory-allocation problems with large matrices that require more than 40 GB of VRAM. Would it be possible for you to release the CPU version of the GNN re-ranker mentioned in your paper? That would save us a lot of time re-implementing the whole module.

731894915 avatar Apr 06 '21 05:04 731894915

Hi @731894915, in my experiments I didn't consume that much VRAM. Could you please provide more details?

Xuanmeng-Zhang avatar Apr 28 '21 03:04 Xuanmeng-Zhang

Hi @731894915 You may also try lower precision, such as float16, to reduce 40 GB to 20 GB. In our experiments, fp16 did not compromise performance much.

layumi avatar Apr 28 '21 11:04 layumi
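A minimal sketch of the fp16 suggestion above, using numpy rather than the repo's CUDA operator (the feature shapes here are made up for illustration): casting the features to `float16` before the pairwise product halves the memory of the resulting n x n matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 512)).astype(np.float32)  # hypothetical features

# Cast to fp16 before the pairwise product; the n x n result
# then costs 2 bytes per entry instead of 4.
feats16 = feats.astype(np.float16)
sim16 = feats16 @ feats16.T

print(sim16.dtype, sim16.nbytes / 1024**2, "MiB")  # fp16 matrix, half the fp32 size
```

Whether the accumulated rounding error of fp16 matters depends on the feature dimension and downstream use; the authors report little performance loss in their setting.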

Hi @Xuanmeng-Zhang @layumi, thanks for your reply. The issue occurs during testing on MSMT17, which has 93820 images across the query and gallery sets.

```
File "..../gnn_reranking/gnn_reranking.py", line 40, in gnn_reranking
    A = build_adjacency_matrix.forward(initial_rank.float())
RuntimeError: CUDA out of memory. Tried to allocate 32.79 GiB (GPU 0; 10.76 GiB total capacity; 144.61 MiB already allocated; 9.61 GiB free; 178.00 MiB reserved in total by PyTorch)
```

From the source code, I found that it builds a 93820 x 93820 matrix, which takes 93820 * 93820 * 4 / (1024^3) = 32.79 GiB of VRAM. Since I am using a single RTX 2080Ti with 11 GB of VRAM, it still might not fit even with fp16.

It also seems that the adjacency matrix cannot be chunked into multiple smaller ones.

731894915 avatar Apr 29 '21 02:04 731894915
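The arithmetic in the comment above can be checked directly; this small sketch also shows why fp16 alone (16.4 GiB) would still exceed an 11 GB card for this matrix size.

```python
n = 93820  # query + gallery images on MSMT17, per the comment above

# bytes per element: 4 for float32, 2 for float16
fp32_gib = n * n * 4 / 1024**3
fp16_gib = n * n * 2 / 1024**3

print(f"fp32: {fp32_gib:.2f} GiB")  # matches the 32.79 GiB in the traceback
print(f"fp16: {fp16_gib:.2f} GiB")  # still larger than an 11 GB RTX 2080Ti
```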

This problem also bothers me. Is there any solution for the "CUDA out of memory" error when constructing the adjacency matrix?

wang-zm18 avatar Jul 20 '21 12:07 wang-zm18

Thanks @731894915 and @wang-zm18. We have discussed this problem a lot, but for the time being it is quite tricky to optimise, since the output needs to be float as well.

We also tried the sparse matrix support in PyTorch, but the matrix still needs to be made dense to carry out the multiplication.

@Xuanmeng-Zhang Do you have any new ideas about this? uint8? Or any other solution that runs partly on the CPU (a large amount of CPU memory may be needed instead)?

layumi avatar Jul 21 '21 16:07 layumi

Hi, I ran into a similar problem when I combined the query and gallery sets in Market1501 and tried to compute a 38562 x 38562 similarity matrix. I do have enough GPU memory, but I get an "illegal memory access" error. When the matrix size is reduced, there is no such problem. My guess is that the computation on the GPU becomes unstable for very large input matrices and produces errors.

ycl54 avatar Nov 04 '22 07:11 ycl54