totalsegmentator changes global torch settings without restoring original state
Hey all, I noticed that global torch settings are changed when totalsegmentator.python_api.totalsegmentator is called. They are not restored once the function finishes, which can lead to significant performance degradation in subsequent torch calls.
I observed the following settings being changed:
"torch_settings.num_threads": { # torch.set_num_threads(...)
"old": 8,
"new": 1
},
"cudnn_settings.benchmark": { # torch.backends.cudnn.benchmark = ...
"old": false,
"new": true
}
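A minimal way to reproduce the observation (the input/output paths are placeholders):

```python
import torch
from totalsegmentator.python_api import totalsegmentator

# Snapshot the affected global settings before the call
before = (torch.get_num_threads(), torch.backends.cudnn.benchmark)

totalsegmentator("ct.nii.gz", "segmentations")

# ... and compare afterwards
after = (torch.get_num_threads(), torch.backends.cudnn.benchmark)
print("before:", before)  # e.g. (8, False)
print("after:", after)    # e.g. (1, True)
```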
Please make sure that totalsegmentator has no such side effects and either restore the global state or isolate the call in its own process.
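Until that is fixed in the library, callers can protect themselves with a small context manager that snapshots and restores the two settings. A sketch; `preserve_torch_settings` is a hypothetical helper, not part of TotalSegmentator:

```python
import contextlib
import torch

@contextlib.contextmanager
def preserve_torch_settings():
    """Restore torch's thread count and cudnn.benchmark flag on exit."""
    num_threads = torch.get_num_threads()
    cudnn_benchmark = torch.backends.cudnn.benchmark
    try:
        yield
    finally:
        torch.set_num_threads(num_threads)
        torch.backends.cudnn.benchmark = cudnn_benchmark

# Usage:
from totalsegmentator.python_api import totalsegmentator

with preserve_torch_settings():
    totalsegmentator("ct.nii.gz", "segmentations")
```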
This is happening somewhere in the nnunet package. I will investigate, but it might take some time. A quick workaround is to call TotalSegmentator from within Python as a shell command via subprocess.call. Not very elegant, but it works.
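For example (paths are placeholders), running the CLI in a child process keeps the current interpreter's torch state untouched:

```python
import subprocess

# The child process gets its own global torch settings,
# so nothing leaks back into the calling interpreter.
subprocess.call(["TotalSegmentator", "-i", "ct.nii.gz", "-o", "segmentations"])
```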
I committed a fix here: https://github.com/wasserth/TotalSegmentator/commit/1de45113756dad7c8d2d489fbc98a14d2e4c9f9a
This is not in master yet. I will merge the branch when a few other features are ready.