CrossPoint-DDP
Gather step absent (?)
Hello.
I am currently using your code for an internship and noticed something about it.
In the training loop, you seem to use the NTXentLoss from lightly, as in the original implementation.
Looking into the documentation of the loss, I found the 'gather_distributed' argument of its constructor (cf. https://docs.lightly.ai/self-supervised-learning/lightly.loss.html#lightly.loss.ntx_ent_loss.NTXentLoss).
The documentation indicates that when this argument is True, negatives are gathered from the other GPUs during distributed training.
Its default value is False and it isn't modified in the code, so the gather step is never performed. Is this intentional or an oversight?
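For illustration, this is roughly the change I have in mind (a minimal sketch; the temperature value is just a placeholder, not taken from this repo):

```python
from lightly.loss import NTXentLoss

# Current behaviour: negatives come only from the local GPU's mini-batch
criterion = NTXentLoss(temperature=0.1)

# Possible change: gather negatives from all GPUs when training with DDP
# (temperature shown here is a placeholder, not the value used in this repo)
criterion = NTXentLoss(temperature=0.1, gather_distributed=True)
```

With gather_distributed=True, each GPU would contrast against negatives from the full global batch instead of only its local shard, which I believe is what the original single-GPU setup effectively does with its larger batch.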