ENAS-pytorch
gpu_nums > 1
If you want to run on multiple GPUs, the parameters used in `self.shared`'s forward pass must live in an `nn.ModuleList` (like `self._w_h`, which is already a `ModuleList`). Otherwise `DataParallel` raises `RuntimeError: tensors are on different GPUs`, because parameters kept in a plain Python list are invisible to `nn.Module` and are not replicated to the other GPUs when `self.forward(...)` runs.
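A minimal sketch of the failure mode and the fix, separate from the actual ENAS-pytorch code: the class names, layer sizes, and the loop body below are illustrative assumptions; only the `_w_h` attribute name and the `ModuleList` fix come from the report above.

```python
import torch
import torch.nn as nn


class BrokenShared(nn.Module):
    """Hypothetical sketch: submodules kept in a plain Python list.

    nn.Module never registers them, so DataParallel.replicate() does
    not copy their weights to the other GPUs -> forward fails with
    "RuntimeError: tensors are on different GPUs".
    """

    def __init__(self):
        super().__init__()
        # Plain list: NOT registered, weights stay on the original GPU.
        self._w_h = [nn.Linear(32, 32) for _ in range(4)]

    def forward(self, x):
        for layer in self._w_h:
            x = torch.relu(layer(x))
        return x


class FixedShared(nn.Module):
    """Same model, but the submodules live in an nn.ModuleList, so they
    are registered and replicated to every GPU by DataParallel."""

    def __init__(self):
        super().__init__()
        self._w_h = nn.ModuleList(nn.Linear(32, 32) for _ in range(4))

    def forward(self, x):
        for layer in self._w_h:
            x = torch.relu(layer(x))
        return x


if __name__ == "__main__":
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(FixedShared().cuda())
        out = model(torch.randn(8, 32).cuda())  # works across GPUs
        print(out.shape)
```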