CoOp
training speed
Thank you for your contribution. I found that training is slower with multiple GPUs (e.g., 8 GPUs) than with a single GPU. Do you know why that is and how to speed up training?
Using DistributedDataParallel would give a significant speedup, but that would require big changes to the underlying Dassl package.
My bad, I didn't consider DistributedDataParallel when designing Dassl.
So for this CoOp code I'd strongly suggest using a single GPU.
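For anyone who does want to try DistributedDataParallel (DDP) anyway, here is a minimal sketch, not part of the CoOp/Dassl codebase. It uses the CPU `gloo` backend with `world_size=1` so it runs anywhere; in practice you would launch one process per GPU with `torchrun` and use the `nccl` backend. The speedup comes from DDP running one process per GPU and only all-reducing gradients, instead of replicating the model from a single process on every forward pass as `nn.DataParallel` does.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # Single-process setup for illustration; under torchrun these
    # environment variables are set for you per worker process.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = nn.Linear(16, 2)   # stand-in for the actual CoOp model
    ddp_model = DDP(model)     # wraps the model; syncs gradients across ranks
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

    x = torch.randn(8, 16)
    loss = ddp_model(x).pow(2).mean()
    loss.backward()            # gradient all-reduce happens during backward
    opt.step()

    dist.destroy_process_group()
    return loss.item()


if __name__ == "__main__":
    main()
```

With real multi-GPU training you would additionally use `DistributedSampler` in the dataloader so each process sees a distinct data shard, which is one of the changes to Dassl mentioned above.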