RL-ViGen
About parallel training
Hello authors, thanks for your excellent work; it is really helpful for the community. I am confused about how to achieve parallel training (this is not actually an issue with this repo). For example, the image size in the dm_control suite is about 84*84, and when I train one experiment, the GPU memory used is quite small but the GPU utilization is high. If I manually train several seeds at the same time, every training process slows down. So my question is: how can I achieve parallel training to accelerate the process (multiprocessing?)?
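To make the question concrete, the kind of seed-parallel launcher I have in mind is sketched below. `train` here is just a hypothetical stand-in for the real training entry point (it is not a function from this repo); each seed runs in its own process so the runs do not share a Python interpreter:

```python
import multiprocessing as mp


def train(seed: int) -> str:
    """Hypothetical placeholder for the real training loop,
    e.g. invoking the repo's training script with this seed."""
    return f"finished seed {seed}"


def run_seeds(seeds):
    # One worker process per seed; each process has its own
    # interpreter, so the GIL is not shared between runs.
    with mp.Pool(processes=len(seeds)) as pool:
        return pool.map(train, seeds)


if __name__ == "__main__":
    print(run_seeds([0, 1, 2]))
```

The concern is that even with separate processes, all seeds contend for the same GPU, which seems to be why each run slows down.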