Comparison of GPUDirect (NIC accessing GPU memory directly) vs. staged copy: GPU memory -> host memory -> NIC
Currently perftest supports GPUDirect, where the NIC can directly access GPU memory, but it would be good to be able to compare it against the path without GPUDirect, i.e. GPU memory -> copied to host memory -> NIC. Can someone give a pointer on how to make this change? What I think we need: allocate host memory, copy the GPU buffer into it using cuMemcpyDtoH, and then use that host memory for the MR?
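The setup step being asked about might look roughly like this. This is a hedged sketch, not perftest code: the function name `stage_gpu_buffer` and its shape are illustrative assumptions; only the CUDA driver API and libibverbs calls themselves are real. Running it requires a CUDA device and an RDMA-capable NIC.

```c
/* Sketch (NOT perftest code): allocate a pinned host bounce buffer, copy the
 * GPU buffer into it with cuMemcpyDtoH, and register the *host* buffer as the
 * MR instead of the GPU pointer. stage_gpu_buffer() is a hypothetical name. */
#include <stddef.h>
#include <cuda.h>
#include <infiniband/verbs.h>

struct ibv_mr *stage_gpu_buffer(struct ibv_pd *pd, CUdeviceptr gpu_buf,
                                size_t size, void **host_buf_out)
{
    void *host_buf;

    /* Pinned host memory keeps the DtoH copy fast (DMA, not pageable). */
    if (cuMemAllocHost(&host_buf, size) != CUDA_SUCCESS)
        return NULL;

    /* One-time copy of the current GPU contents into the bounce buffer. */
    if (cuMemcpyDtoH(host_buf, gpu_buf, size) != CUDA_SUCCESS) {
        cuMemFreeHost(host_buf);
        return NULL;
    }
    *host_buf_out = host_buf;

    /* The NIC only ever sees host memory; no nv_peer_mem/dmabuf needed. */
    return ibv_reg_mr(pd, host_buf, size,
                      IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}
```

With this, the rest of the benchmark path can stay unchanged, since the MR points at ordinary host memory.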
Hi @alokprasad, do you mean to perform the copies in the datapath?
@sshaulnv yes, that would give insight into the improvement achieved by GPUDirect.
To ensure optimal bandwidth, we generally avoid performing intensive operations within the datapath. Assuming GPUDirect is unavailable and we need to send a message from GPU memory, we would probably first copy the buffer to host memory before entering the datapath.
@sshaulnv I agree that's a good solution if we have constant data, but consider a scenario where Host 1 sends GPU data to Host 2, Host 2 does some processing, and Host 1 receives it back; then we need to do constant host-memory <-> GPU-memory copies in the datapath.
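The per-iteration cost described in that scenario could be sketched like this (hedged, fragmentary sketch, not perftest code): the variables `qp`, `mr`, `host_buf`, `gpu_buf`, `size`, and `iters` are assumed to come from earlier setup, and CQ polling plus all error handling are elided.

```c
/* Sketch of a staged datapath loop: each iteration pays a DtoH copy before
 * posting and an HtoD copy after the reply -- exactly the overhead that
 * GPUDirect removes. Assumes qp/mr/host_buf/gpu_buf/size/iters exist. */
for (int i = 0; i < iters; i++) {
    /* Fresh GPU data must be staged into host memory before every post. */
    cuMemcpyDtoH(host_buf, gpu_buf, size);

    struct ibv_sge sge = {
        .addr = (uintptr_t)host_buf, .length = size, .lkey = mr->lkey };
    struct ibv_send_wr wr = {
        .sg_list = &sge, .num_sge = 1, .opcode = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED };
    struct ibv_send_wr *bad_wr;
    ibv_post_send(qp, &wr, &bad_wr);

    /* ... poll the CQ for the send completion and the reply here ... */

    /* Mirror copy: move the processed reply back into GPU memory. */
    cuMemcpyHtoD(gpu_buf, host_buf, size);
}
```

Measuring this loop against the GPUDirect path would quantify the staging overhead the thread is discussing.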
@alokprasad IMO implementing a staging data path in perftest does not make much sense. It is much easier and meaningful to leverage UCX or a GPU-aware MPI library + OSU MPI benchmark. Btw a number of papers have done that already.
@drossetti I got the point. Can you please point me to the papers, hopefully with some GitHub links so I can check out the code?
Still wondering if perftest has an option to use GPU memory but without dmabuf or nv_peer_mem.