quickNAT_pytorch
"cpu" or cpu not supported as device in settings_eval.ini
When I set `device` to `"cpu"` in settings_eval.ini, just to test CPU performance, I get:
```
Traceback (most recent call last):
  File "run.py", line 187, in <module>
    evaluate_bulk(settings_eval['EVAL_BULK'])
  File "run.py", line 136, in evaluate_bulk
    mc_samples)
  File "/home/diedre/git/quickNAT_pytorch/utils/evaluator.py", line 260, in evaluate
    model.cuda(device)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 311, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 208, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 208, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 230, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 311, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: Invalid device, must be cuda device
```
This seems like an easy fix: change `model.cuda(device)` to `model.to(device)`, with `device` redefined as something like:

```python
torch.device("cpu") if device == "cpu" else torch.device("cuda:{}".format(device))
```
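A small standalone sketch of that redefinition (the helper name `resolve_device` is mine, not from the repo; it assumes the settings value is either the string `"cpu"` or an integer GPU index):

```python
def resolve_device(device_setting):
    """Turn the settings value into a device string for torch.device.

    "cpu" stays "cpu"; an integer GPU index such as 0 becomes "cuda:0".
    (Hypothetical helper, for illustration only.)
    """
    if device_setting == "cpu":
        return "cpu"
    return "cuda:{}".format(device_setting)

print(resolve_device("cpu"))  # -> cpu
print(resolve_device(0))      # -> cuda:0
```

`model.to(torch.device(resolve_device(device)))` then works for both the CPU and GPU cases, which `model.cuda(device)` cannot.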
I also tested `cpu` (without quotes), and got:
```
Traceback (most recent call last):
  File "run.py", line 186, in <module>
    settings_eval = Settings('settings_eval.ini')
  File "/home/diedre/git/quickNAT_pytorch/settings.py", line 10, in __init__
    self.settings_dict = _parse_values(config)
  File "/home/diedre/git/quickNAT_pytorch/settings.py", line 27, in _parse_values
    config_parsed[section][key] = ast.literal_eval(value)
  File "/usr/lib/python3.6/ast.py", line 85, in literal_eval
    return _convert(node_or_string)
  File "/usr/lib/python3.6/ast.py", line 84, in _convert
    raise ValueError('malformed node or string: ' + repr(node))
ValueError: malformed node or string: <_ast.Name object at 0x7f13134c55c0>
```
The settings parser runs every value through `ast.literal_eval`, so a bare `cpu` parses as a name rather than a literal; the code probably expects an int or a quoted string.
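The difference is easy to demonstrate with `ast.literal_eval` on its own (standalone demo, not repo code): quoted strings and ints are valid Python literals, while a bare name is not.

```python
import ast

# Values that are valid Python literals parse fine
print(ast.literal_eval('"cpu"'))  # -> cpu  (a str)
print(ast.literal_eval('0'))      # -> 0    (an int)

# A bare name like cpu is an ast.Name node, not a literal,
# so literal_eval rejects it with the error seen above
try:
    ast.literal_eval('cpu')
except ValueError as e:
    print('ValueError:', e)
```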
I am getting a similar error when trying to run the `eval_bulk` command. The error is:
```
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
Following community suggestions, I modified line 33 in run.py as below, but I am still getting the same error:

```python
quicknat_model = torch.load(train_params['pre_trained_path'], map_location=torch.device('cpu'))
```
Altering utils/evaluator.py gives the desired functionality (starting at line 256):

```python
cuda_available = torch.cuda.is_available()
if cuda_available:
    model = torch.load(coronal_model_path)
    torch.cuda.empty_cache()
    model.cuda(device)
else:
    model = torch.load(coronal_model_path, map_location=torch.device('cpu'))
```
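The branch above can also be collapsed into a single device-agnostic load (my own sketch, not the repo's code; a toy `nn.Linear` checkpoint stands in for the real coronal model file, and the `weights_only=False` argument exists only on newer PyTorch releases, where it is required to unpickle a full `nn.Module`):

```python
import torch
import torch.nn as nn

# Pick whichever device is actually available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in checkpoint; the real code would use coronal_model_path
torch.save(nn.Linear(4, 2), "toy_model.pt")

# map_location remaps any CUDA storages in the checkpoint to `device`,
# so the same call works on CPU-only machines
model = torch.load("toy_model.pt", map_location=device, weights_only=False)
model.to(device)  # no-op when device is already correct
```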