ESAM
Training on 8 GPUs gives lower performance
Thanks for your great work and your open-sourced code. I have tried to train configs/ESAM-E_CA/ESAM-E_sv_scannet200_CA.py on 1 GPU and on 8 GPUs, with lr=1e-4 and lr=8e-4 respectively.
Here are the results:
1 GPU with lr=1e-4
+---------+---------+---------+--------+
| classes | AP_0.25 | AP_0.50 | AP |
+---------+---------+---------+--------+
| object | 0.8941 | 0.7831 | 0.5758 |
+---------+---------+---------+--------+
| Overall | 0.8941 | 0.7831 | 0.5758 |
+---------+---------+---------+--------+
8 GPUs with lr=8e-4
+---------+---------+---------+--------+
| classes | AP_0.25 | AP_0.50 | AP |
+---------+---------+---------+--------+
| object | 0.8853 | 0.7684 | 0.5508 |
+---------+---------+---------+--------+
| Overall | 0.8853 | 0.7684 | 0.5508 |
+---------+---------+---------+--------+
Do you have any suggestions for maintaining performance when training on multiple GPUs to speed up training?
Hi, thanks for your interest! We are sorry, but we have not tried training ESAM with more than 4 GPUs. I suggest trying more learning rates while keeping the other hyperparameters fixed.
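For reference, one common recipe when moving to more GPUs is the linear scaling rule: scale the learning rate with the number of GPUs and add a short warmup to recover the accuracy lost at the larger effective batch size. Below is a minimal sketch of such an override in MMEngine-style config syntax; the optimizer type, weight decay, gradient clipping, and scheduler values are placeholder assumptions for illustration, not the settings actually used by ESAM.

```python
# Hypothetical override on top of ESAM-E_sv_scannet200_CA.py (MMEngine-style).
# All concrete values below are assumptions, not ESAM's real config.

num_gpus = 8
base_lr = 1e-4  # the 1-GPU learning rate used above

# Linear scaling rule: lr grows with the total (effective) batch size.
optim_wrapper = dict(
    optimizer=dict(type='AdamW', lr=base_lr * num_gpus, weight_decay=0.05),
    clip_grad=dict(max_norm=10, norm_type=2),
)

# A short linear warmup often closes the gap introduced by the larger lr
# (Goyal et al., "Accurate, Large Minibatch SGD").
param_scheduler = [
    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
    dict(type='PolyLR', power=0.9, eta_min=0.0, by_epoch=True),
]

# Alternatively, MMEngine can rescale the lr automatically relative to a
# reference batch size recorded in the config:
# auto_scale_lr = dict(enable=True, base_batch_size=...)
```

Intermediate learning rates (e.g. 2e-4 or 4e-4, i.e. sublinear scaling) are also worth sweeping, since the linear rule can overshoot at this batch-size scale.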