
A PyTorch DistributedDataParallel (DDP) implementation of CrossPoint (CVPR 2022).

4 CrossPoint-DDP issues:

Hello. I am currently using your code for an internship and noticed something about it: in the training loop, you seem to use the NTXentLoss function...
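For context on what the NTXentLoss mentioned in this issue computes: NT-Xent is the normalized temperature-scaled cross-entropy loss popularized by SimCLR, which pulls embeddings of the two augmented views of the same sample together and pushes all other pairs apart. The sketch below is an illustrative NumPy reference implementation, not the repository's actual code; the function name and default temperature are assumptions for the example.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent (SimCLR-style) contrastive loss.

    z1, z2: (N, D) arrays holding embeddings of two views of the
    same N samples; row i of z1 and row i of z2 are a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = (z @ z.T) / temperature                      # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # drop self-similarity

    n = z1.shape[0]
    # Positive partner of row i is row i+n, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Row-wise log-softmax, with max subtraction for numerical stability.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Cross entropy against each row's positive index, averaged over 2N rows.
    return float(-log_prob[np.arange(2 * n), pos].mean())
```

With perfectly aligned pairs (z2 identical to z1 and rows mutually orthogonal) the loss is small; shuffling the pairing raises it, which is a quick sanity check when wiring the loss into a training loop.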

Hi, thank you very much for your efforts, but when I ran it with three graphics cards it took a very long time to run, and the GPU utilization did not...

Hello, Jerry Sun. Thank you for sharing your implementation of DDP training for CrossPoint. While training, I ran into this issue: work = default_pg.allgather([tensor_list],...

Excuse me, when I was running distributed training, the log kept outputting "DEBUG SenderThread: 1236909 [sender. py: send(): 182] send: stats", and finally it raised a RuntimeError: Timed out initializing process...
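For timeouts like the one in this issue, a common first debugging step is to turn on NCCL's own logging and pin the network interface the ranks use to reach each other. These are standard NCCL environment variables; the interface name below is a placeholder, not something taken from this repository.

```shell
# Illustrative debugging setup for "Timed out initializing process group".
# NCCL_DEBUG=INFO makes NCCL print its rendezvous/transport decisions.
export NCCL_DEBUG=INFO
# Placeholder interface name -- replace eth0 with the NIC that actually
# connects your nodes (check with `ip addr`); a wrong or auto-detected
# interface is a frequent cause of init hangs.
export NCCL_SOCKET_IFNAME=eth0
```

If the logs show the ranks never completing the rendezvous, the usual suspects are firewalled ports, mismatched MASTER_ADDR/MASTER_PORT across ranks, or NCCL picking an unreachable interface.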