Errors with numpy >= 2
Hi, NumPy 2 was released very recently, and it seems to be causing the following error:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.10.14/x64/bin/TotalSegmentator", line 8, in <module>
sys.exit(main())
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/totalsegmentator/bin/TotalSegmentator.py", line 137, in main
totalsegmentator(args.input, args.output, args.ml, args.nr_thr_resamp, args.nr_thr_saving,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/totalsegmentator/python_api.py", line 407, in totalsegmentator
body_seg, _, _ = nnUNet_predict_image(input, None, 300, model="3d_fullres", folds=[0],
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/totalsegmentator/nnunet.py", line 477, in nnUNet_predict_image
nnUNetv2_predict(tmp_dir, tmp_dir, task_id, model, folds, trainer, tta,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/totalsegmentator/nnunet.py", line 245, in nnUNetv2_predict
predictor.initialize_from_trained_model_folder(
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 97, in initialize_from_trained_model_folder
trainer_class = recursive_find_python_class(join(nnunetv2.path[0], "training", "nnUNetTrainer"),
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/nnunetv2/utilities/find_class_by_name.py", line 12, in recursive_find_python_class
m = importlib.import_module(current_module + "." + modname)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 34, in <module>
from nnunetv2.training.data_augmentation.custom_transforms.cascade_transforms import MoveSegAsOneHotToData,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/nnunetv2/training/data_augmentation/custom_transforms/cascade_transforms.py", line 4, in <module>
from acvl_utils.morphology.morphology_helper import label_with_component_sizes
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/acvl_utils/morphology/morphology_helper.py", line 7, in <module>
import cc3d
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.10.14/x64/bin/TotalSegmentator", line 8, in <module>
sys.exit(main())
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/totalsegmentator/bin/TotalSegmentator.py", line 137, in main
totalsegmentator(args.input, args.output, args.ml, args.nr_thr_resamp, args.nr_thr_saving,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/totalsegmentator/python_api.py", line 407, in totalsegmentator
body_seg, _, _ = nnUNet_predict_image(input, None, 300, model="3d_fullres", folds=[0],
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/totalsegmentator/nnunet.py", line 477, in nnUNet_predict_image
nnUNetv2_predict(tmp_dir, tmp_dir, task_id, model, folds, trainer, tta,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/totalsegmentator/nnunet.py", line 245, in nnUNetv2_predict
predictor.initialize_from_trained_model_folder(
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 97, in initialize_from_trained_model_folder
trainer_class = recursive_find_python_class(join(nnunetv2.path[0], "training", "nnUNetTrainer"),
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/nnunetv2/utilities/find_class_by_name.py", line 12, in recursive_find_python_class
m = importlib.import_module(current_module + "." + modname)
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 34, in <module>
from nnunetv2.training.data_augmentation.custom_transforms.cascade_transforms import MoveSegAsOneHotToData,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/nnunetv2/training/data_augmentation/custom_transforms/cascade_transforms.py", line 4, in <module>
from acvl_utils.morphology.morphology_helper import label_with_component_sizes
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/site-packages/acvl_utils/morphology/morphology_helper.py", line 7, in <module>
import cc3d
File "cc3d.pyx", line 1, in init cc3d
ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).
It seems that cc3d does not work with numpy 2 yet. Maybe this dependency can be removed, or only imported when it is really needed. Otherwise it would make sense to require numpy<2.
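To make the numpy<2 suggestion concrete, here is a rough sketch of what such a pin could look like in a project's setup.py; the package name and dependency list are placeholders, not TotalSegmentator's actual configuration:

```python
# setup.py -- illustrative sketch only, not TotalSegmentator's real file.
# Caps numpy below 2.0 until compiled dependencies such as
# connected-components-3d ship wheels built against NumPy 2.
from setuptools import setup, find_packages

setup(
    name="my_package",                 # placeholder project name
    packages=find_packages(),
    install_requires=[
        "numpy<2",                     # temporary upper bound
        "connected-components-3d",     # the cc3d dependency from the traceback
    ],
)
```

Installing the cap directly with pip in the affected environment (`pip install "numpy<2"`) should have the same effect until the compiled dependencies catch up.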
I see a couple of tickets related to numpy v2.0.0 (#2295, #2308). I have slightly different symptoms, but effectively the same underlying problem. Would it be best to use this ticket as a catch-all, or start a new one?
My particular problem is that I trained a model in a brand-new virtual environment, which of course installed the newest numpy. When I try to load that model for inference as part of a larger workflow, torch.load fails. Because of the other parts of that workflow, I would prefer not to update numpy. I did try updating it, along with all the other related packages (pandas, matplotlib, etc.), but I still had errors, many of which were various packages failing to import in a similar way to the one in @wasserth's post above.
I would strongly recommend requiring numpy<2 while other dependencies update and stabilise.
If anybody has a suggestion for how to load my trained model with an older version of numpy, I would very much appreciate it.
Update: I didn't expect this to work, but I have managed to load the model and run inference with numpy==1.26.4 and pandas==2.2.1.
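For anyone hitting the same torch.load failure, this is the kind of sanity check I would run before loading the checkpoint. It is only a sketch: it reports the installed versions and warns on NumPy >= 2, the checkpoint path is a placeholder, and it does not fix anything by itself.

```python
import numpy as np
import torch

CHECKPOINT = "checkpoint_final.pth"  # placeholder path to the trained model

print(f"numpy {np.__version__}, torch {torch.__version__}")

# Several compiled packages in this stack were built against NumPy 1.x,
# so warn if the current environment already has NumPy 2.
if int(np.__version__.split(".")[0]) >= 2:
    print("Warning: NumPy >= 2 detected; if torch.load fails here, "
          "try numpy==1.26.4 as described above.")

state = torch.load(CHECKPOINT, map_location="cpu")
print(f"Loaded checkpoint with {len(state)} top-level entries")
```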
Is there any update on whether numpy >= 2 will be supported?
Is there any update?
Here is the numpy 2.0 compatibility table: https://github.com/numpy/numpy/issues/26191
It seems the problematic package connected-components-3d from @wasserth's error message has been updated (compiled against numpy >= 2.0 since June 17, 2024): https://github.com/seung-lab/connected-components-3d/issues/125
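If anyone wants to check whether their local environment already has the fixed build, a quick sketch (assuming the PyPI distribution name connected-components-3d) is to import it next to NumPy and see whether the warning or the multiarray ImportError from above still appears:

```python
from importlib.metadata import version

import numpy as np

print("numpy", np.__version__)
print("connected-components-3d", version("connected-components-3d"))

# With a wheel built against NumPy 2, this import should succeed without the
# "compiled using NumPy 1.x" warning or the
# "numpy.core.multiarray failed to import" error shown in the traceback above.
import cc3d
print("cc3d imported OK")
```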