semantic-segmentation-pytorch

Training with 2 GPUs is much slower than with 1 GPU

Open ivanlado opened this issue 1 year ago • 0 comments

I compared the time it took to train the models using 2 GPUs vs. using 1 GPU, and the result was that training with 2 GPUs is much slower. In fact, training with 2 GPUs takes at least twice the amount of time compared to using 1 GPU. What is happening? What is wrong?

I have looked at the messages displayed after every iteration, and although the "data" time does not vary with respect to the single-GPU case, the "time" time is at least twice as large in the 2-GPU case.

  • "data" time: the time it takes to load the data.
  • "time" time: the time it takes to do a whole iteration, including loading the data and the forward and backward passes. Disclaimer: these confusing terms are the ones used in the code.
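For context, these two metrics are typically produced by timers wrapped around the data-loading step and the full training step. The sketch below is a hypothetical illustration of that pattern, not the repository's actual code; `AverageMeter`, `train_one_epoch`, and `step_fn` are made-up names for this example.

```python
import time

class AverageMeter:
    """Tracks the running average of a timing metric."""
    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def update(self, val):
        self.sum += val
        self.count += 1

    @property
    def average(self):
        return self.sum / self.count if self.count else 0.0

def train_one_epoch(loader, step_fn):
    data_time = AverageMeter()   # "data": waiting for the next batch only
    batch_time = AverageMeter()  # "time": full iteration, data + forward + backward
    tic = time.perf_counter()
    for batch in loader:
        data_time.update(time.perf_counter() - tic)   # measured before the step
        step_fn(batch)                                # forward, backward, optimizer step
        batch_time.update(time.perf_counter() - tic)  # measured after the step
        tic = time.perf_counter()
    return data_time.average, batch_time.average
```

Under this instrumentation, an unchanged "data" time but a doubled "time" time would point at the compute/synchronization portion of the step (e.g. gradient replication overhead in multi-GPU wrappers such as `nn.DataParallel`) rather than at the data pipeline.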

The comparisons have been made using the same hardware configurations.

ivanlado · Oct 02 '24 18:10