EfficientNet-PyTorch
Consider whether the user has a GPU or only a CPU when calling `torch.load()`
https://github.com/lukemelas/EfficientNet-PyTorch/blob/7e8b0d312162f335785fb5dcfa1df29a75a1783a/efficientnet_pytorch/utils.py#L598
```python
if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
state_dict = torch.load(weights_path, map_location=device)
```
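For reference, the same check can be folded directly into the `torch.load()` call; a minimal sketch, assuming `torch` is imported and `weights_path` is in scope as in the linked snippet:

```python
# Equivalent one-liner: map the checkpoint to the GPU if one is available, else to the CPU.
state_dict = torch.load(
    weights_path,
    map_location="cuda" if torch.cuda.is_available() else "cpu",
)
```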
I wonder if I can load the model parameters onto the GPU correctly by simply writing:
```python
model = EfficientNet.from_pretrained('efficientnet-b3')
model.cuda()
```
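The usual pattern is to map the checkpoint onto the CPU at load time and then move the whole model afterwards; a minimal sketch, where the local `weights_path` checkpoint file is hypothetical and `from_name` is used just to build the architecture without downloading weights:

```python
import torch
from efficientnet_pytorch import EfficientNet

weights_path = "efficientnet-b3.pth"  # hypothetical local checkpoint file

model = EfficientNet.from_name('efficientnet-b3')          # build the architecture only
state_dict = torch.load(weights_path, map_location="cpu")  # safe even without CUDA
model.load_state_dict(state_dict)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # equivalent to model.cuda() when a GPU is present
```

Calling `.cuda()` on the assembled model, as in the question, performs the same move; the part that matters is that `torch.load()` itself receives a `map_location`, so deserialization never assumes a CUDA device.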
Actually, this might cause more problems than the possible ease of use is worth. Consider the case where one uses `torch.nn.DistributedDataParallel`: there is no way to ensure that `torch.device("cuda")` will resolve to the appropriate GPU when there are multiple devices or nodes.
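One common way to handle that case is to map the checkpoint onto each process's own device explicitly; a minimal sketch, assuming the `LOCAL_RANK` environment variable set by `torchrun`/`torch.distributed.launch` and a `weights_path` as in the snippet above:

```python
import os
import torch

# Each DistributedDataParallel worker maps the checkpoint onto its own GPU,
# rather than onto whatever torch.device("cuda") happens to resolve to.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
device = (
    torch.device(f"cuda:{local_rank}")
    if torch.cuda.is_available()
    else torch.device("cpu")
)
state_dict = torch.load(weights_path, map_location=device)
```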
I'm not sure; I don't have much experience with ML. I don't have a PC with a CUDA-compatible GPU. This package did not run on my machine, but when I added the above code block to the package's source it ran fine. That's why I suggested adding that block.