[Question] Running each trial on a different GPU?
Is it possible, in a multi-GPU scenario, to have each available GPU run a separate trial? So far it seems that using multi_gpu_model is not accelerating our computer vision deep learning model (U-Net / Mask R-CNN), so running each trial on its own GPU could give us a significant speedup, but I've found no information on the matter.
Thank you.
This is something we would have to raise in hyperopt itself. It's not a simple matter, but it's very interesting; it certainly doesn't just happen out of the box.
The simplest path to getting this to work would be to use the GPU identifier as a custom hyperparameter that always returns the next value in a list, e.g. via itertools.cycle(GPU_IDS). From there you'd use hyperopt's MongoDB workers (hyperopt-mongo-worker) and make sure there are never more than len(GPU_IDS) concurrent workers.
Something like:
import tensorflow as tf
with tf.device({{hp.cycle(['/gpu:0', '/gpu:1'])}}):
    # build and train the model for this trial here
    ...
I'm not sure what impact this would have on TPE, though.
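For illustration, a minimal sketch of that cycling idea in plain hyperopt (without hyperas templates) could look like the following. The GPU list, the search space, and the dummy loss are assumptions made purely to keep the example self-contained; only fmin, tpe, hp, Trials, and STATUS_OK are actual hyperopt names.

import itertools

import tensorflow as tf
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe

GPU_IDS = ['/gpu:0', '/gpu:1']        # assumed list of available devices
gpu_cycle = itertools.cycle(GPU_IDS)  # each trial takes the next device in turn

def objective(params):
    device = next(gpu_cycle)          # pin this trial to one GPU
    with tf.device(device):
        # Build and train the real model here using `params`.
        # The dummy loss below only keeps the sketch runnable.
        loss = params['lr']
    return {'loss': loss, 'status': STATUS_OK}

space = {'lr': hp.loguniform('lr', -10, -2)}  # example search space
best = fmin(objective, space, algo=tpe.suggest, max_evals=10, trials=Trials())

Note that with a sequential Trials object this only alternates GPUs between trials rather than running them in parallel. For actual parallelism you would switch to MongoTrials and start at most len(GPU_IDS) hyperopt-mongo-worker processes; since module-level state like gpu_cycle is not shared across processes, each worker would typically be pinned to its own GPU instead, for example via CUDA_VISIBLE_DEVICES.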