
Python 3.11: RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message

Open vmiller987 opened this issue 9 months ago • 17 comments

Hello, I have been unable to use the nnUNet library as intended due to a FATAL FIPS error. We believe we traced that issue down; however, I'm still having issues.

Commands I used

python -m venv .venv
pip install nnunetv2
pip uninstall python-gdcm  # This must be done to avoid the FATAL FIPS error (#2430); I believe it isn't even used

nnUNetv2_plan_and_preprocess -d 501 --verify_dataset_integrity

nnUNetv2_train 501 2d 0

Output

(.venv-nn) [vmiller@gluskap caml]$ nnUNetv2_train 501 2d 0

############################
INFO: You are using the old nnU-Net default plans. We have updated our recommendations. Please consider using those instead! Read more here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md
############################

Using device: cuda:0

#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################

2025-03-21 16:36:52.792056: Using torch.compile...
2025-03-21 16:36:53.943590: do_dummy_2d_data_aug: False
2025-03-21 16:36:53.944241: Using splits from existing split file: /mnt/cleo/nnUNet/nnUNet_preprocessed/Dataset501_M662_nnUNet/splits_final.json
2025-03-21 16:36:53.944771: The split file contains 5 splits.
2025-03-21 16:36:53.945225: Desired fold for training: 0
2025-03-21 16:36:53.945703: This split has 24 training and 6 validation cases.
using pin_memory on device 0
Exception in thread Thread-1 (results_loop):
Traceback (most recent call last):
  File "/usr/lib64/python3.11/threading.py", line 1045, in _bootstrap_inner
Traceback (most recent call last):
  File "/home/vmiller/work/caml2/caml/.venv-nn/bin/nnUNetv2_train", line 8, in <module>
    self.run()
  File "/usr/lib64/python3.11/threading.py", line 982, in run
    sys.exit(run_training_entry())
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/nnunetv2/run/run_training.py", line 267, in run_training_entry
    self._target(*self._args, **self._kwargs)
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 125, in results_loop
    run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/nnunetv2/run/run_training.py", line 207, in run_training
    raise e
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 103, in results_loop
    nnunet_trainer.run_training()
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 1363, in run_training
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
    self.on_train_start()
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 900, in on_train_start
    self.dataloader_train, self.dataloader_val = self.get_dataloaders()
                                                 ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 690, in get_dataloaders
    _ = next(mt_gen_train)
        ^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 197, in __next__
    item = self.__get_next_item()
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/work/caml2/caml/.venv-nn/lib64/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 182, in __get_next_item
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message

Old Method

Previously, I cloned the repo and accessed the code directly in order to avoid the FATAL FIPS error. Below is how I would usually do this.

import os

from nnunetv2.run.run_training import run_training

if __name__ == "__main__":
    # 2D U-Net All Folds
    os.environ["CUDA_VISIBLE_DEVICES"] = "7"

    for fold in ["all", 0, 1, 2, 3, 4]:
        print("\n****************************************")
        print("2D U-Net")
        print(f"Fold: {fold}")
        print("****************************************\n")

        run_training(
            dataset_name_or_id="Dataset500_M662_nnUNet",
            configuration="2d",
            fold=fold,
            plans_identifier="nnUNetResEncUNetMPlans",
            num_gpus=1,
            export_validation_probabilities=True,
            continue_training=True,
        )

    print("\nAll folds completed successfully.")
    for fold in ["all", 0, 1, 2, 3, 4]:
        print(f"Fold {fold} completed successfully.")
    print("Training for all folds is now complete.")

Am I doing something incorrectly?

vmiller987 avatar Mar 21 '25 20:03 vmiller987

export nnUNet_n_proc_DA=0 is the only way I can avoid this error.
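For context, nnUNet_n_proc_DA controls how many background data-augmentation workers nnU-Net spawns. A minimal sketch of the effect, paraphrased from the classes visible in the tracebacks above rather than copied from the library (the real default worker count is hardware-dependent):

import os
from batchgenerators.dataloading.single_threaded_augmenter import SingleThreadedAugmenter
from batchgenerators.dataloading.nondet_multi_threaded_augmenter import NonDetMultiThreadedAugmenter

def make_augmenter(dataloader, transform=None):
    # nnU-Net derives a hardware-dependent default; 12 is just a placeholder here
    num_processes = int(os.environ.get("nnUNet_n_proc_DA", 12))
    if num_processes == 0:
        # everything runs in the main process: slower, but no background workers to die
        return SingleThreadedAugmenter(dataloader, transform)
    # spawns worker processes; batches are shipped back through shared memory (/dev/shm)
    return NonDetMultiThreadedAugmenter(dataloader, transform, num_processes, 2)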

I still get blasted by a wall of warnings. I assume that's because this repository is no longer being maintained, which is sad given how useful nnUNet is.

nn.txt

vmiller987 avatar Mar 31 '25 15:03 vmiller987

+1 I'm having the same RuntimeError issue

nrepina avatar Apr 02 '25 18:04 nrepina

Hey,

This is definitely a tricky one—we haven’t been able to reproduce it on our end. Based on the text file you shared, it seems the issue might be related to torch.compile. Could you try running it again with the latest version and set nnUNet_compile=False?

Hope that helps for now! P.S. nnU-Net is still actively maintained and regularly updated. We're just a small team doing our best to keep up with the high volume of issues, thanks to nnU-Net's growing popularity.

mrokuss avatar May 09 '25 15:05 mrokuss

You could try this: sudo apt install build-essential

933c933c avatar May 12 '25 09:05 933c933c

I'm not sure if this is completely relevant, but I managed to avoid a similar dead worker error by taking lines 38-40 in nnunetv2/training/dataloading/utils.py

np.load(data_npy, mmap_mode='r')
if isfile(seg_npy):
    np.load(seg_npy, mmap_mode = 'r')

and replacing them with

np.load(data_npy)
if isfile(seg_npy):
    np.load(seg_npy)

i.e. removing the mmap_mode option. mmap_mode creates a memory-map object that reads the data from disk as needed rather than loading it all into RAM first. The problem is that Windows limits the number and size of these memory-mapped files, hence you can run into WinError 8.

Deleting mmap_mode will automatically load all the data into RAM, which works if you have a lot of RAM on your machine (~1.5x the size of your nnUNet_preprocessed data for your model).
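A minimal standalone illustration of the difference (hypothetical file name, not nnUNet code):

import numpy as np

# hypothetical example of the two loading modes discussed above
arr = np.zeros((64, 64, 64), dtype=np.float32)
np.save("example.npy", arr)

mapped = np.load("example.npy", mmap_mode="r")  # lazy: a numpy.memmap backed by the file on disk
loaded = np.load("example.npy")                 # eager: a plain ndarray fully resident in RAM
print(type(mapped), type(loaded))               # <class 'numpy.memmap'> <class 'numpy.ndarray'>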

MattAWard avatar May 16 '25 09:05 MattAWard

Hey,

This is definitely a tricky one—we haven’t been able to reproduce it on our end. Based on the text file you shared, it seems the issue might be related to torch.compile. Could you try running it again with the latest version and set nnUNet_compile=False?

Hope that helps for now! P.S. nnU-Net is still actively maintained and regularly updated. We're just a small team doing our best to keep up with the high volume of issues, thanks to nnU-Net's growing popularity.

Thanks for the reply!

nnUNet_compile=False removes a lot of the torch dynamo errors, but it does not resolve this issue. I can start training if I do export nnUNet_n_proc_DA=0. This applies to ALL of my datasets at this point.

vmiller987 avatar May 17 '25 12:05 vmiller987

I'm not sure if this is completely relevant, but I managed to avoid a similar dead worker error by taking lines 38-40 in nnunetv2/training/dataloading/utils.py

np.load(data_npy, mmap_mode='r')
if isfile(seg_npy):
    np.load(seg_npy, mmap_mode = 'r')

and replacing them with

np.load(data_npy)
if isfile(seg_npy):
    np.load(seg_npy)

i.e. removing the mmap_mode option. mmap_mode creates a memory-map object that reads the data from disk as needed rather than loading it all into RAM first. The problem is that Windows limits the number and size of these memory-mapped files, hence you can run into WinError 8.

Deleting mmap_mode will automatically load all the data into RAM, which works if you have a lot of RAM on your machine (~1.5x the size of your nnUNet_preprocessed data for your model).

Thank you for the suggestion! When I have some time next week, I will certainly give this a try as we have ~750GB of RAM. Setting export nnUNet_n_proc_DA=0 is the only way I have been able to make any trainings work so far. I am also having background workers die on some of my larger images during preprocessing, and I'm wondering if there is an image size limit that nnUNet can handle.

vmiller987 avatar May 17 '25 12:05 vmiller987

@mrokuss

I had not provided you with a log for the flag you asked me to add. It does resolve the wall of warnings, but not the background workers dying.

This is a different dataset, but the error reproduces across all of our datasets.

(nnUNetv2) [vmiller@gluskap nnUNetv2]$ CUDA_VISIBLE_DEVICES=5 nnUNet_compile=False nnUNetv2_train 802 3d_cascade_fullres all -p nnUNetResEncUNetMPlans
Using device: cuda:0

#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################

2025-05-28 15:30:52.563251: do_dummy_2d_data_aug: True
using pin_memory on device 0
Exception in thread Thread-1 (results_loop):
Traceback (most recent call last):
  File "/usr/lib64/python3.11/threading.py", line 1045, in _bootstrap_inner
TEST True
    self.run()
  File "/usr/lib64/python3.11/threading.py", line 982, in run
Traceback (most recent call last):
  File "/home/vmiller/work/caml/nnUNetv2/.venv/bin/nnUNetv2_train", line 8, in <module>
    self._target(*self._args, **self._kwargs)
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 125, in results_loop
    sys.exit(run_training_entry())
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/nnunetv2/run/run_training.py", line 267, in run_training_entry
    run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
    raise e
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/nnunetv2/run/run_training.py", line 207, in run_training
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 103, in results_loop
    nnunet_trainer.run_training()
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 1364, in run_training
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
    self.on_train_start()
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 901, in on_train_start
    self.dataloader_train, self.dataloader_val = self.get_dataloaders()
                                                 ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 691, in get_dataloaders
    _ = next(mt_gen_train)
        ^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 197, in __next__
    item = self.__get_next_item()
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/work/caml/nnUNetv2/.venv/lib64/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 182, in __get_next_item
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
(nnUNetv2) [vmiller@gluskap nnUNetv2]$ export nnUNet_n_proc_DA=0
(nnUNetv2) [vmiller@gluskap nnUNetv2]$ CUDA_VISIBLE_DEVICES=5 nnUNet_compile=False nnUNetv2_train 802 3d_cascade_fullres all -p nnUNetResEncUNetMPlans
Using device: cuda:0

#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################

2025-05-28 15:31:11.662495: do_dummy_2d_data_aug: True

This is the configuration used by this training:
Configuration name: 3d_cascade_fullres
 {'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 2, 'patch_size': [80, 352, 80], 'median_image_size_in_voxels': [512.0, 2303.0, 512.0], 'spacing': [0.976562, 0.625, 0.976562], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 320, 320], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 1]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': True, 'inherits_from': '3d_fullres', 'previous_stage': '3d_lowres'} 

These are the global plan.json settings:
 {'dataset_name': 'Dataset802_Bundles', 'plans_name': 'nnUNetResEncUNetMPlans', 'original_median_spacing_after_transp': [0.976562, 0.625, 0.976562], 'original_median_shape_after_transp': [512, 2303, 512], 'image_reader_writer': 'SimpleITKIO', 'transpose_forward': [1, 0, 2], 'transpose_backward': [1, 0, 2], 'experiment_planner_used': 'nnUNetPlannerResEncM', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 21658.0, 'mean': 174.65882873535156, 'median': 139.0, 'min': -4370.0, 'percentile_00_5': -17.0, 'percentile_99_5': 658.0, 'std': 168.3129425048828}}} 

2025-05-28 15:31:12.606025: Unable to plot network architecture:
2025-05-28 15:31:12.608189: No module named 'hiddenlayer'
2025-05-28 15:31:12.617018: 
2025-05-28 15:31:12.618221: Epoch 0
2025-05-28 15:31:12.619102: Current learning rate: 0.01


vmiller987 avatar May 28 '25 19:05 vmiller987

There should be no warnings with the current master and pytorch 2.7.0. Please install the master with pip install git+https://github.com/MIC-DKFZ/nnUNet.git.

The master also fixes some smaller problems, for example workers dying during preprocessing due to some blosc2 stuff (that I am still investigating). This should however not affect your training crashes.

Using the background workers for data augmentation requires your system to allow for tensor transfer between background workers and the main python process. This requires shared memory. You can check how much you have with df -h /dev/shm. For me this is 31G, about half my RAM size. Maybe your issues are related to that?
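The same check can be done from Python if that is more convenient (a minimal sketch, Linux only, equivalent in spirit to df -h /dev/shm):

import shutil

# report shared-memory capacity, mirroring `df -h /dev/shm`
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")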

You are not running in a Docker or in WSL, are you?

Best, Fabian

FabianIsensee avatar May 28 '25 20:05 FabianIsensee

We have also had people with hardware issues report such problems in the past. Just to be sure, could you check your RAM for errors with memtest86? Is this happening just on your workstation or also on others in your department? What operating system / Linux distro are you using? Are you experiencing errors with other data-loading infrastructure such as PyTorch's dataloaders?

FabianIsensee avatar May 28 '25 20:05 FabianIsensee

@FabianIsensee Thank you for the input!

We're on opposite sides of the world, so I'm going to give a short answer now and work on a longer, more detailed answer tomorrow. I've done a bunch of testing today, but I have more testing to do still. I apologize if my message is a bit disorganized.

We have two machines running Red Hat Enterprise Linux 9.5. Our main machine has eight RTX 4090s and 750GB of RAM. We have a second machine with one A6000 and ~128GB of RAM.

I have created venvs using nnUNet's master and PyTorch 2.7 as you directed. I do have to pip uninstall python-gdcm in order to avoid the FATAL FIPS error (another issue I filed a while back). It is a dependency of dicom2nifti, and this PR would resolve it (#2748 ).

We are not using Docker or WSL. We are running directly in the linux environment through VNC/SSH.

Both machines require export nnUNet_n_proc_DA=0 to be set in order to train. Otherwise I get the background workers die message.

One peculiar thing: the main machine can only preprocess this dataset for 3d_lowres, not 2d or 3d_fullres, while our secondary machine is the opposite; it can preprocess 2d and 3d_fullres but not 3d_lowres. This is with identical data.

I will work on finishing tests, running memtest86, etc., tomorrow. I do not have issues using PyTorch's dataloaders with my very novice UNet implementation built with Lightning and MONAI.
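For a standalone sanity check of multiprocess data loading outside nnU-Net, a minimal sketch like this (hypothetical, not from the thread) can help: if background workers also die here, the culprit is likely the OS or shared memory rather than nnUNet itself.

import torch
from torch.utils.data import DataLoader, Dataset

class RandomVolumes(Dataset):
    # dummy dataset producing random "image" tensors
    def __len__(self):
        return 32

    def __getitem__(self, idx):
        return torch.randn(1, 64, 64, 64)

if __name__ == "__main__":
    # num_workers > 0 forks background worker processes, similar to nnU-Net's augmenter
    loader = DataLoader(RandomVolumes(), batch_size=2, num_workers=4)
    for batch in loader:
        pass
    print("multiprocess loading OK:", batch.shape)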

vmiller987 avatar May 29 '25 20:05 vmiller987

That’s really strange, especially that preprocessing only works for different configurations on each machine. We’ve got pretty similar setups here (mostly non–Red Hat Linux with 3090s, 4090s, A100s, ...) and haven’t run into issues like that.

Could you share a bit more about the dataset? Things like number of cases, modalities, image sizes, anything non-standard? Thanks for digging into it—curious to hear what you find as you test more.

mrokuss avatar May 29 '25 20:05 mrokuss

Please (re)run all preprocessing with the current master. It could be that the preprocessed data is messed up. Preprocessing should run smoothly - please let us know if not!

The big issue for us is that we don't have a setup with which we can reproduce your issues, so it's really difficult to fix. You don't have the opportunity to test a different distro by chance, do you? We know Ubuntu works - maybe it really is a distro-related problem.

I spent >4 hours this week tracking down a weird bug and in the end it turned out to be my Linux kernel which was at fault. Upgrading from 6.11 to 6.13 solved it. I am not saying this is the case for you, but something like this can happen and it's super hard to debug. Even ChatGPT (which usually helps a ton) was completely clueless on that one.

FabianIsensee avatar May 30 '25 11:05 FabianIsensee

I am happy to report that my issues have been resolved but I feel like a complete novice over the final solution.

  1. Preprocessing has been resolved (#2792 ). I don't have this issue on either machine now. The secondary machine failing lowres was a skill issue: if I had actually read the stack trace, it stated a RAM out-of-memory error (only 128GB). I have run preprocessing multiple times across both machines to confirm. I am not sure what I did wrong yesterday on the main machine when I tested master; I even checked my terminal history to confirm I did it correctly.

  2. Training background workers crashing (this issue): the culprit is Python 3.11.

  • Python 3.9: Not compatible with nnUNet. The pyproject.toml states this; installation_instructions.md is outdated, as it still says 3.9+.
  • Python 3.10: No issue
  • Python 3.11: I must do export nnUNet_n_proc_DA=0 in order to train.
  • Python 3.12: No issue.
  • Python 3.13: No issue.

Several months ago, I was having compatibility issues with a separate project and only 3.11 worked, so I stayed on this version. I should have tested different versions sooner.

I apologize if this ended up being a waste of time on your end. I certainly feel like a novice after this. I'm not sure what has broken in 3.11, but I shall just start using a more up-to-date version of Python now...

I greatly appreciate both of you for providing help. I have learned a lot.

vmiller987 avatar May 30 '25 18:05 vmiller987

I updated the title to include Python 3.11. Are you able to replicate the issue using this version of Python? I was able to replicate this bug with more than one dataset.

I just received a new dataset with absolutely massive images (~2100 × ~2100 × ~3000 voxels) with spacings of ~0.23. Later this week or next week, I'm going to try to create some dummy/test labels and see how nnUNet handles them.

vmiller987 avatar Jun 02 '25 13:06 vmiller987

I cannot reproduce this problem using 3.11. I just created a fresh conda env and installed nnunet via pip - everything works as expected:

conda create -n deleteme python=3.11
conda activate deleteme
pip install nnunetv2
nnUNetv2_train 4 3d_fullres 0 -tr nnUNetTrainer_5epochs

(I like to run short 5-epoch tests on MSD Hippocampus.) If you go through these steps, does it also fail? Please use miniforge, not Anaconda's conda (or you may run into license issues).

FabianIsensee avatar Jun 03 '25 07:06 FabianIsensee

Unfortunately, it persists. I must add export nnUNet_n_proc_DA=0 in order for it to train.

As this issue persists with the Hippocampus dataset, does @mrokuss still want information on our datasets?

I tested this on both of our production machines. They have completely different hardware, but both use the same RHEL 9.5 OS. Since it's happening across both machines, I'm leaning towards an OS conflict.

I believe my next step should be to install Ubuntu and test whether it's an issue with RHEL. I need to schedule time to bring our primary machine offline. My plan is to set up Ubuntu on a USB drive and boot from that instead of operating in a VM. It may take me a bit before I have information on this.

Primary ML Machine

[vmiller@gluskap ~]$ /home/vmiller/miniforge/bin/conda create -n deleteme python=3.11
Channels:
 - conda-forge
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
    current version: 25.3.0
    latest version: 25.5.0

Please update conda by running

    $ conda update -n base -c conda-forge conda



## Package Plan ##

  environment location: /home/vmiller/miniforge/envs/deleteme

  added / updated specs:
    - python=3.11


The following NEW packages will be INSTALLED:

  _libgcc_mutex      conda-forge/linux-64::_libgcc_mutex-0.1-conda_forge 
  _openmp_mutex      conda-forge/linux-64::_openmp_mutex-4.5-2_gnu 
  bzip2              conda-forge/linux-64::bzip2-1.0.8-h4bc722e_7 
  ca-certificates    conda-forge/noarch::ca-certificates-2025.4.26-hbd8a1cb_0 
  ld_impl_linux-64   conda-forge/linux-64::ld_impl_linux-64-2.43-h712a8e2_4 
  libexpat           conda-forge/linux-64::libexpat-2.7.0-h5888daf_0 
  libffi             conda-forge/linux-64::libffi-3.4.6-h2dba641_1 
  libgcc             conda-forge/linux-64::libgcc-15.1.0-h767d61c_2 
  libgcc-ng          conda-forge/linux-64::libgcc-ng-15.1.0-h69a702a_2 
  libgomp            conda-forge/linux-64::libgomp-15.1.0-h767d61c_2 
  liblzma            conda-forge/linux-64::liblzma-5.8.1-hb9d3cd8_1 
  libnsl             conda-forge/linux-64::libnsl-2.0.1-hd590300_0 
  libsqlite          conda-forge/linux-64::libsqlite-3.50.0-hee588c1_0 
  libuuid            conda-forge/linux-64::libuuid-2.38.1-h0b41bf4_0 
  libxcrypt          conda-forge/linux-64::libxcrypt-4.4.36-hd590300_1 
  libzlib            conda-forge/linux-64::libzlib-1.3.1-hb9d3cd8_2 
  ncurses            conda-forge/linux-64::ncurses-6.5-h2d0b736_3 
  openssl            conda-forge/linux-64::openssl-3.5.0-h7b32b05_1 
  pip                conda-forge/noarch::pip-25.1.1-pyh8b19718_0 
  python             conda-forge/linux-64::python-3.11.12-h9e4cc4f_0_cpython 
  readline           conda-forge/linux-64::readline-8.2-h8c095d6_2 
  setuptools         conda-forge/noarch::setuptools-80.9.0-pyhff2d567_0 
  tk                 conda-forge/linux-64::tk-8.6.13-noxft_hd72426e_102 
  tzdata             conda-forge/noarch::tzdata-2025b-h78e105d_0 
  wheel              conda-forge/noarch::wheel-0.45.1-pyhd8ed1ab_1 


Proceed ([y]/n)? y


Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate deleteme
#
# To deactivate an active environment, use
#
#     $ conda deactivate

[vmiller@gluskap ~]$ source ~/.bashrc
(base) [vmiller@gluskap ~]$ conda activate deleteme
(deleteme) [vmiller@gluskap ~]$ pip install nnunetv2
Collecting nnunetv2
  Using cached nnunetv2-2.6.2-py3-none-any.whl
Collecting torch>=2.1.2 (from nnunetv2)
  Using cached torch-2.7.0-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (29 kB)
Collecting acvl-utils<0.3,>=0.2.3 (from nnunetv2)
  Using cached acvl_utils-0.2.5-py3-none-any.whl
Collecting dynamic-network-architectures<0.5,>=0.4.1 (from nnunetv2)
  Using cached dynamic_network_architectures-0.4.1-py3-none-any.whl
Collecting tqdm (from nnunetv2)
  Using cached tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
Collecting scipy (from nnunetv2)
  Using cached scipy-1.15.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting batchgenerators>=0.25.1 (from nnunetv2)
  Using cached batchgenerators-0.25.1-py3-none-any.whl
Collecting numpy>=1.24 (from nnunetv2)
  Using cached numpy-2.2.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (62 kB)
Collecting scikit-learn (from nnunetv2)
  Using cached scikit_learn-1.6.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (18 kB)
Collecting scikit-image>=0.19.3 (from nnunetv2)
  Using cached scikit_image-0.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (14 kB)
Collecting SimpleITK>=2.2.1 (from nnunetv2)
  Using cached simpleitk-2.5.0-cp311-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.2 kB)
Collecting pandas (from nnunetv2)
  Using cached pandas-2.2.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (89 kB)
Collecting graphviz (from nnunetv2)
  Using cached graphviz-0.20.3-py3-none-any.whl.metadata (12 kB)
Collecting tifffile (from nnunetv2)
  Using cached tifffile-2025.6.1-py3-none-any.whl.metadata (32 kB)
Collecting requests (from nnunetv2)
  Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting nibabel (from nnunetv2)
  Using cached nibabel-5.3.2-py3-none-any.whl.metadata (9.1 kB)
Collecting matplotlib (from nnunetv2)
  Using cached matplotlib-3.10.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
Collecting seaborn (from nnunetv2)
  Using cached seaborn-0.13.2-py3-none-any.whl.metadata (5.4 kB)
Collecting imagecodecs (from nnunetv2)
  Using cached imagecodecs-2025.3.30-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (20 kB)
Collecting yacs (from nnunetv2)
  Using cached yacs-0.1.8-py3-none-any.whl.metadata (639 bytes)
Collecting batchgeneratorsv2>=0.3.0 (from nnunetv2)
  Using cached batchgeneratorsv2-0.3.0-py3-none-any.whl
Collecting einops (from nnunetv2)
  Using cached einops-0.8.1-py3-none-any.whl.metadata (13 kB)
Collecting blosc2>=3.0.0b1 (from nnunetv2)
  Using cached blosc2-3.3.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.9 kB)
Collecting connected-components-3d (from acvl-utils<0.3,>=0.2.3->nnunetv2)
  Using cached connected_components_3d-3.23.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (32 kB)
Collecting timm (from dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Using cached timm-1.0.15-py3-none-any.whl.metadata (52 kB)
Collecting pillow>=7.1.2 (from batchgenerators>=0.25.1->nnunetv2)
  Using cached pillow-11.2.1-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (8.9 kB)
Collecting future (from batchgenerators>=0.25.1->nnunetv2)
  Using cached future-1.0.0-py3-none-any.whl.metadata (4.0 kB)
Collecting unittest2 (from batchgenerators>=0.25.1->nnunetv2)
  Using cached unittest2-1.1.0-py2.py3-none-any.whl.metadata (15 kB)
Collecting threadpoolctl (from batchgenerators>=0.25.1->nnunetv2)
  Using cached threadpoolctl-3.6.0-py3-none-any.whl.metadata (13 kB)
Collecting fft-conv-pytorch (from batchgeneratorsv2>=0.3.0->nnunetv2)
  Using cached fft_conv_pytorch-1.2.0-py3-none-any.whl.metadata (2.8 kB)
Collecting ndindex (from blosc2>=3.0.0b1->nnunetv2)
  Using cached ndindex-1.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.6 kB)
Collecting msgpack (from blosc2>=3.0.0b1->nnunetv2)
  Using cached msgpack-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.4 kB)
Collecting platformdirs (from blosc2>=3.0.0b1->nnunetv2)
  Using cached platformdirs-4.3.8-py3-none-any.whl.metadata (12 kB)
Collecting numexpr (from blosc2>=3.0.0b1->nnunetv2)
  Using cached numexpr-2.10.2-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (8.1 kB)
Collecting py-cpuinfo (from blosc2>=3.0.0b1->nnunetv2)
  Using cached py_cpuinfo-9.0.0-py3-none-any.whl.metadata (794 bytes)
Collecting networkx>=3.0 (from scikit-image>=0.19.3->nnunetv2)
  Using cached networkx-3.5-py3-none-any.whl.metadata (6.3 kB)
Collecting imageio!=2.35.0,>=2.33 (from scikit-image>=0.19.3->nnunetv2)
  Using cached imageio-2.37.0-py3-none-any.whl.metadata (5.2 kB)
Collecting packaging>=21 (from scikit-image>=0.19.3->nnunetv2)
  Using cached packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
Collecting lazy-loader>=0.4 (from scikit-image>=0.19.3->nnunetv2)
  Using cached lazy_loader-0.4-py3-none-any.whl.metadata (7.6 kB)
Collecting filelock (from torch>=2.1.2->nnunetv2)
  Using cached filelock-3.18.0-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions>=4.10.0 (from torch>=2.1.2->nnunetv2)
  Using cached typing_extensions-4.14.0-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy>=1.13.3 (from torch>=2.1.2->nnunetv2)
  Using cached sympy-1.14.0-py3-none-any.whl.metadata (12 kB)
Collecting jinja2 (from torch>=2.1.2->nnunetv2)
  Using cached jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
Collecting fsspec (from torch>=2.1.2->nnunetv2)
  Using cached fsspec-2025.5.1-py3-none-any.whl.metadata (11 kB)
Collecting nvidia-cuda-nvrtc-cu12==12.6.77 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-runtime-cu12==12.6.77 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-cupti-cu12==12.6.80 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cudnn-cu12==9.5.1.17 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu12==12.6.4.1 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufft-cu12==11.3.0.4 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-curand-cu12==10.3.7.77 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cusolver-cu12==11.7.1.2 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu12==12.5.4.2 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparselt-cu12==0.6.3 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cusparselt_cu12-0.6.3-py3-none-manylinux2014_x86_64.whl.metadata (6.8 kB)
Collecting nvidia-nccl-cu12==2.26.2 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_nccl_cu12-2.26.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.0 kB)
Collecting nvidia-nvtx-cu12==12.6.77 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nvjitlink-cu12==12.6.85 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufile-cu12==1.11.1.6 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting triton==3.3.0 (from torch>=2.1.2->nnunetv2)
  Using cached triton-3.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (1.5 kB)
Requirement already satisfied: setuptools>=40.8.0 in ./miniforge/envs/deleteme/lib/python3.11/site-packages (from triton==3.3.0->torch>=2.1.2->nnunetv2) (80.9.0)
Collecting mpmath<1.4,>=1.1.0 (from sympy>=1.13.3->torch>=2.1.2->nnunetv2)
  Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch>=2.1.2->nnunetv2)
  Using cached MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting contourpy>=1.0.1 (from matplotlib->nnunetv2)
  Using cached contourpy-1.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.5 kB)
Collecting cycler>=0.10 (from matplotlib->nnunetv2)
  Using cached cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB)
Collecting fonttools>=4.22.0 (from matplotlib->nnunetv2)
  Using cached fonttools-4.58.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (106 kB)
Collecting kiwisolver>=1.3.1 (from matplotlib->nnunetv2)
  Using cached kiwisolver-1.4.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.2 kB)
Collecting pyparsing>=2.3.1 (from matplotlib->nnunetv2)
  Using cached pyparsing-3.2.3-py3-none-any.whl.metadata (5.0 kB)
Collecting python-dateutil>=2.7 (from matplotlib->nnunetv2)
  Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting six>=1.5 (from python-dateutil>=2.7->matplotlib->nnunetv2)
  Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting importlib-resources>=5.12 (from nibabel->nnunetv2)
  Using cached importlib_resources-6.5.2-py3-none-any.whl.metadata (3.9 kB)
Collecting pytz>=2020.1 (from pandas->nnunetv2)
  Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas->nnunetv2)
  Using cached tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting charset-normalizer<4,>=2 (from requests->nnunetv2)
  Using cached charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests->nnunetv2)
  Using cached idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting urllib3<3,>=1.21.1 (from requests->nnunetv2)
  Using cached urllib3-2.4.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests->nnunetv2)
  Using cached certifi-2025.4.26-py3-none-any.whl.metadata (2.5 kB)
Collecting joblib>=1.2.0 (from scikit-learn->nnunetv2)
  Using cached joblib-1.5.1-py3-none-any.whl.metadata (5.6 kB)
Collecting torchvision (from timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Using cached torchvision-0.22.0-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (6.1 kB)
Collecting pyyaml (from timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Using cached PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting huggingface_hub (from timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Using cached huggingface_hub-0.32.4-py3-none-any.whl.metadata (14 kB)
Collecting safetensors (from timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Using cached safetensors-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.8 kB)
Collecting hf-xet<2.0.0,>=1.1.2 (from huggingface_hub->timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Using cached hf_xet-1.1.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (879 bytes)
Collecting argparse (from unittest2->batchgenerators>=0.25.1->nnunetv2)
  Using cached argparse-1.4.0-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting traceback2 (from unittest2->batchgenerators>=0.25.1->nnunetv2)
  Using cached traceback2-1.4.0-py2.py3-none-any.whl.metadata (1.5 kB)
Collecting linecache2 (from traceback2->unittest2->batchgenerators>=0.25.1->nnunetv2)
  Using cached linecache2-1.0.0-py2.py3-none-any.whl.metadata (1000 bytes)
Using cached blosc2-3.3.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.4 MB)
Using cached numpy-2.2.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
Using cached pillow-11.2.1-cp311-cp311-manylinux_2_28_x86_64.whl (4.6 MB)
Using cached scikit_image-0.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (14.8 MB)
Using cached imageio-2.37.0-py3-none-any.whl (315 kB)
Using cached lazy_loader-0.4-py3-none-any.whl (12 kB)
Using cached networkx-3.5-py3-none-any.whl (2.0 MB)
Using cached packaging-25.0-py3-none-any.whl (66 kB)
Using cached scipy-1.15.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37.7 MB)
Using cached simpleitk-2.5.0-cp311-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (52.6 MB)
Using cached tifffile-2025.6.1-py3-none-any.whl (230 kB)
Using cached torch-2.7.0-cp311-cp311-manylinux_2_28_x86_64.whl (865.2 MB)
Using cached nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (393.1 MB)
Using cached nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (8.9 MB)
Using cached nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl (23.7 MB)
Using cached nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (897 kB)
Using cached nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl (571.0 MB)
Using cached nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (200.2 MB)
Using cached nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (1.1 MB)
Using cached nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (56.3 MB)
Using cached nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (158.2 MB)
Using cached nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (216.6 MB)
Using cached nvidia_cusparselt_cu12-0.6.3-py3-none-manylinux2014_x86_64.whl (156.8 MB)
Using cached nvidia_nccl_cu12-2.26.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (201.3 MB)
Using cached nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl (19.7 MB)
Using cached nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89 kB)
Using cached triton-3.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (156.5 MB)
Using cached sympy-1.14.0-py3-none-any.whl (6.3 MB)
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached typing_extensions-4.14.0-py3-none-any.whl (43 kB)
Using cached connected_components_3d-3.23.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.3 MB)
Using cached einops-0.8.1-py3-none-any.whl (64 kB)
Using cached fft_conv_pytorch-1.2.0-py3-none-any.whl (6.8 kB)
Using cached filelock-3.18.0-py3-none-any.whl (16 kB)
Using cached fsspec-2025.5.1-py3-none-any.whl (199 kB)
Using cached future-1.0.0-py3-none-any.whl (491 kB)
Using cached graphviz-0.20.3-py3-none-any.whl (47 kB)
Using cached imagecodecs-2025.3.30-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (45.6 MB)
Using cached jinja2-3.1.6-py3-none-any.whl (134 kB)
Using cached MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23 kB)
Using cached matplotlib-3.10.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (8.6 MB)
Using cached contourpy-1.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (326 kB)
Using cached cycler-0.12.1-py3-none-any.whl (8.3 kB)
Using cached fonttools-4.58.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.9 MB)
Using cached kiwisolver-1.4.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.4 MB)
Using cached pyparsing-3.2.3-py3-none-any.whl (111 kB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Using cached msgpack-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (403 kB)
Using cached ndindex-1.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (528 kB)
Using cached nibabel-5.3.2-py3-none-any.whl (3.3 MB)
Using cached importlib_resources-6.5.2-py3-none-any.whl (37 kB)
Using cached numexpr-2.10.2-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (398 kB)
Using cached pandas-2.2.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.1 MB)
Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB)
Using cached tzdata-2025.2-py2.py3-none-any.whl (347 kB)
Using cached platformdirs-4.3.8-py3-none-any.whl (18 kB)
Using cached py_cpuinfo-9.0.0-py3-none-any.whl (22 kB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (147 kB)
Using cached idna-3.10-py3-none-any.whl (70 kB)
Using cached urllib3-2.4.0-py3-none-any.whl (128 kB)
Using cached certifi-2025.4.26-py3-none-any.whl (159 kB)
Using cached scikit_learn-1.6.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.5 MB)
Using cached joblib-1.5.1-py3-none-any.whl (307 kB)
Using cached threadpoolctl-3.6.0-py3-none-any.whl (18 kB)
Using cached seaborn-0.13.2-py3-none-any.whl (294 kB)
Using cached timm-1.0.15-py3-none-any.whl (2.4 MB)
Using cached huggingface_hub-0.32.4-py3-none-any.whl (512 kB)
Using cached hf_xet-1.1.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.8 MB)
Using cached PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (762 kB)
Using cached tqdm-4.67.1-py3-none-any.whl (78 kB)
Using cached safetensors-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (471 kB)
Using cached torchvision-0.22.0-cp311-cp311-manylinux_2_28_x86_64.whl (7.4 MB)
Using cached unittest2-1.1.0-py2.py3-none-any.whl (96 kB)
Using cached argparse-1.4.0-py2.py3-none-any.whl (23 kB)
Using cached traceback2-1.4.0-py2.py3-none-any.whl (16 kB)
Using cached linecache2-1.0.0-py2.py3-none-any.whl (12 kB)
Using cached yacs-0.1.8-py3-none-any.whl (14 kB)
Installing collected packages: SimpleITK, pytz, py-cpuinfo, nvidia-cusparselt-cu12, mpmath, linecache2, argparse, urllib3, tzdata, typing-extensions, triton, traceback2, tqdm, threadpoolctl, sympy, six, safetensors, pyyaml, pyparsing, platformdirs, pillow, packaging, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufile-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, networkx, ndindex, msgpack, MarkupSafe, kiwisolver, joblib, importlib-resources, idna, hf-xet, graphviz, future, fsspec, fonttools, filelock, einops, cycler, charset-normalizer, certifi, yacs, unittest2, tifffile, scipy, requests, python-dateutil, nvidia-cusparse-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, numexpr, nibabel, lazy-loader, jinja2, imageio, imagecodecs, contourpy, connected-components-3d, scikit-learn, scikit-image, pandas, nvidia-cusolver-cu12, matplotlib, huggingface_hub, blosc2, torch, seaborn, batchgenerators, torchvision, fft-conv-pytorch, acvl-utils, timm, batchgeneratorsv2, dynamic-network-architectures, nnunetv2
Successfully installed MarkupSafe-3.0.2 SimpleITK-2.5.0 acvl-utils-0.2.5 argparse-1.4.0 batchgenerators-0.25.1 batchgeneratorsv2-0.3.0 blosc2-3.3.4 certifi-2025.4.26 charset-normalizer-3.4.2 connected-components-3d-3.23.0 contourpy-1.3.2 cycler-0.12.1 dynamic-network-architectures-0.4.1 einops-0.8.1 fft-conv-pytorch-1.2.0 filelock-3.18.0 fonttools-4.58.1 fsspec-2025.5.1 future-1.0.0 graphviz-0.20.3 hf-xet-1.1.3 huggingface_hub-0.32.4 idna-3.10 imagecodecs-2025.3.30 imageio-2.37.0 importlib-resources-6.5.2 jinja2-3.1.6 joblib-1.5.1 kiwisolver-1.4.8 lazy-loader-0.4 linecache2-1.0.0 matplotlib-3.10.3 mpmath-1.3.0 msgpack-1.1.0 ndindex-1.10.0 networkx-3.5 nibabel-5.3.2 nnunetv2-2.6.2 numexpr-2.10.2 numpy-2.2.6 nvidia-cublas-cu12-12.6.4.1 nvidia-cuda-cupti-cu12-12.6.80 nvidia-cuda-nvrtc-cu12-12.6.77 nvidia-cuda-runtime-cu12-12.6.77 nvidia-cudnn-cu12-9.5.1.17 nvidia-cufft-cu12-11.3.0.4 nvidia-cufile-cu12-1.11.1.6 nvidia-curand-cu12-10.3.7.77 nvidia-cusolver-cu12-11.7.1.2 nvidia-cusparse-cu12-12.5.4.2 nvidia-cusparselt-cu12-0.6.3 nvidia-nccl-cu12-2.26.2 nvidia-nvjitlink-cu12-12.6.85 nvidia-nvtx-cu12-12.6.77 packaging-25.0 pandas-2.2.3 pillow-11.2.1 platformdirs-4.3.8 py-cpuinfo-9.0.0 pyparsing-3.2.3 python-dateutil-2.9.0.post0 pytz-2025.2 pyyaml-6.0.2 requests-2.32.3 safetensors-0.5.3 scikit-image-0.25.2 scikit-learn-1.6.1 scipy-1.15.3 seaborn-0.13.2 six-1.17.0 sympy-1.14.0 threadpoolctl-3.6.0 tifffile-2025.6.1 timm-1.0.15 torch-2.7.0 torchvision-0.22.0 tqdm-4.67.1 traceback2-1.4.0 triton-3.3.0 typing-extensions-4.14.0 tzdata-2025.2 unittest2-1.1.0 urllib3-2.4.0 yacs-0.1.8
(deleteme) [vmiller@gluskap ~]$ nnUNetv2_train 4 3d_fullres 0 -tr nnUNetTrainer_5epochs

############################
INFO: You are using the old nnU-Net default plans. We have updated our recommendations. Please consider using those instead! Read more here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md
############################

Using device: cuda:0

#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################

2025-06-04 10:49:18.916780: Using torch.compile...
2025-06-04 10:49:19.651716: do_dummy_2d_data_aug: False
2025-06-04 10:49:19.652762: Using splits from existing split file: /home/vmiller/work/nnUNet/nnUNet_preprocessed/Dataset004_Hippocampus/splits_final.json
2025-06-04 10:49:19.653389: The split file contains 5 splits.
2025-06-04 10:49:19.653774: Desired fold for training: 0
2025-06-04 10:49:19.654169: This split has 208 training and 52 validation cases.
using pin_memory on device 0
Exception in thread Thread-2 (results_loop):
Traceback (most recent call last):
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
Traceback (most recent call last):
  File "/home/vmiller/miniforge/envs/deleteme/bin/nnUNetv2_train", line 8, in <module>
    sys.exit(run_training_entry())
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/run/run_training.py", line 266, in run_training_entry
    self.run()
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/threading.py", line 982, in run
    run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/run/run_training.py", line 207, in run_training
    self._target(*self._args, **self._kwargs)
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 125, in results_loop
    nnunet_trainer.run_training()
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 1363, in run_training
    raise e
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 103, in results_loop
    self.on_train_start()
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 900, in on_train_start
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
    self.dataloader_train, self.dataloader_val = self.get_dataloaders()
                                                 ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 690, in get_dataloaders
    _ = next(mt_gen_train)
        ^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 196, in __next__
    item = self.__get_next_item()
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 181, in __get_next_item
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
(deleteme) [vmiller@gluskap ~]$ export nnUNet_n_proc_DA=0
(deleteme) [vmiller@gluskap ~]$ nnUNetv2_train 4 3d_fullres 0 -tr nnUNetTrainer_5epochs

############################
INFO: You are using the old nnU-Net default plans. We have updated our recommendations. Please consider using those instead! Read more here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md
############################

Using device: cuda:0

#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################

2025-06-04 10:49:35.872346: Using torch.compile...
2025-06-04 10:49:36.605337: do_dummy_2d_data_aug: False
2025-06-04 10:49:36.606412: Using splits from existing split file: /home/vmiller/work/nnUNet/nnUNet_preprocessed/Dataset004_Hippocampus/splits_final.json
2025-06-04 10:49:36.607007: The split file contains 5 splits.
2025-06-04 10:49:36.607394: Desired fold for training: 0
2025-06-04 10:49:36.607776: This split has 208 training and 52 validation cases.

This is the configuration used by this training:
Configuration name: 3d_fullres
 {'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 9, 'patch_size': [40, 56, 40], 'median_image_size_in_voxels': [36.0, 50.0, 35.0], 'spacing': [1.0, 1.0, 1.0], 'normalization_schemes': ['ZScoreNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.PlainConvUNet', 'arch_kwargs': {'n_stages': 4, 'features_per_stage': [32, 64, 128, 256], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2]], 'n_conv_per_stage': [2, 2, 2, 2], 'n_conv_per_stage_decoder': [2, 2, 2], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': False} 

These are the global plan.json settings:
 {'dataset_name': 'Dataset004_Hippocampus', 'plans_name': 'nnUNetPlans', 'original_median_spacing_after_transp': [1.0, 1.0, 1.0], 'original_median_shape_after_transp': [36, 50, 35], 'image_reader_writer': 'SimpleITKIO', 'transpose_forward': [0, 1, 2], 'transpose_backward': [0, 1, 2], 'experiment_planner_used': 'ExperimentPlanner', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 486420.21875, 'mean': 22360.326171875, 'median': 362.88250732421875, 'min': 0.0, 'percentile_00_5': 28.0, 'percentile_99_5': 277682.03125, 'std': 60656.1328125}}} 

2025-06-04 10:49:36.768134: Unable to plot network architecture: nnUNet_compile is enabled!
2025-06-04 10:49:36.774000: 
2025-06-04 10:49:36.774906: Epoch 0
2025-06-04 10:49:36.775938: Current learning rate: 0.01
/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/torch/_inductor/lowering.py:7007: UserWarning: 
Online softmax is disabled on the fly since Inductor decides to
split the reduction. Cut an issue to PyTorch if this is an
important use case and you want to speed it up with online
softmax.

  warnings.warn(
2025-06-04 10:50:20.371095: train_loss -0.1684
2025-06-04 10:50:20.372303: val_loss -0.4649
2025-06-04 10:50:20.373277: Pseudo dice [np.float32(0.2994), np.float32(0.5783)]
2025-06-04 10:50:20.374202: Epoch time: 43.6 s
2025-06-04 10:50:20.375126: Yayy! New best EMA pseudo Dice: 0.43880000710487366
2025-06-04 10:50:20.812908: 
2025-06-04 10:50:20.814204: Epoch 1
2025-06-04 10:50:20.815250: Current learning rate: 0.00818
2025-06-04 10:50:45.103410: train_loss -0.5007
2025-06-04 10:50:45.104746: val_loss -0.6208
2025-06-04 10:50:45.105758: Pseudo dice [np.float32(0.7992), np.float32(0.8046)]
2025-06-04 10:50:45.106790: Epoch time: 24.29 s
2025-06-04 10:50:45.107833: Yayy! New best EMA pseudo Dice: 0.47510001063346863
2025-06-04 10:50:45.632715: 
2025-06-04 10:50:45.634073: Epoch 2
2025-06-04 10:50:45.635162: Current learning rate: 0.00631
2025-06-04 10:51:10.663007: train_loss -0.7522
2025-06-04 10:51:10.664441: val_loss -0.7974
2025-06-04 10:51:10.665508: Pseudo dice [np.float32(0.8578), np.float32(0.8412)]
2025-06-04 10:51:10.666567: Epoch time: 25.03 s
2025-06-04 10:51:10.667600: Yayy! New best EMA pseudo Dice: 0.5126000046730042
2025-06-04 10:51:11.203852: 
2025-06-04 10:51:11.205230: Epoch 3
2025-06-04 10:51:11.206355: Current learning rate: 0.00438
2025-06-04 10:51:34.328071: train_loss -0.8044
2025-06-04 10:51:34.329611: val_loss -0.8201
2025-06-04 10:51:34.330728: Pseudo dice [np.float32(0.8756), np.float32(0.8564)]
2025-06-04 10:51:34.331819: Epoch time: 23.12 s
2025-06-04 10:51:34.332887: Yayy! New best EMA pseudo Dice: 0.5479000210762024
2025-06-04 10:51:34.866735: 
2025-06-04 10:51:34.868215: Epoch 4
2025-06-04 10:51:34.869401: Current learning rate: 0.00235
2025-06-04 10:51:58.833467: train_loss -0.8142
2025-06-04 10:51:58.834750: val_loss -0.8189
2025-06-04 10:51:58.835874: Pseudo dice [np.float32(0.873), np.float32(0.8553)]
2025-06-04 10:51:58.836967: Epoch time: 23.97 s
2025-06-04 10:51:58.838062: Yayy! New best EMA pseudo Dice: 0.5795000195503235
2025-06-04 10:51:59.433245: Training done.
2025-06-04 10:51:59.444639: Using splits from existing split file: /home/vmiller/work/nnUNet/nnUNet_preprocessed/Dataset004_Hippocampus/splits_final.json
2025-06-04 10:51:59.446637: The split file contains 5 splits.
2025-06-04 10:51:59.448005: Desired fold for training: 0
2025-06-04 10:51:59.449369: This split has 208 training and 52 validation cases.
2025-06-04 10:51:59.450831: predicting hippocampus_017
2025-06-04 10:51:59.453359: hippocampus_017, shape torch.Size([1, 32, 48, 35]), rank 0
2025-06-04 10:52:05.961305: predicting hippocampus_019
2025-06-04 10:52:05.965193: hippocampus_019, shape torch.Size([1, 41, 47, 36]), rank 0
2025-06-04 10:52:05.983614: predicting hippocampus_033
2025-06-04 10:52:05.986576: hippocampus_033, shape torch.Size([1, 38, 48, 33]), rank 0
2025-06-04 10:52:05.998419: predicting hippocampus_035
2025-06-04 10:52:06.001951: hippocampus_035, shape torch.Size([1, 37, 47, 35]), rank 0
2025-06-04 10:52:06.013288: predicting hippocampus_037
2025-06-04 10:52:06.015774: hippocampus_037, shape torch.Size([1, 32, 51, 34]), rank 0
2025-06-04 10:52:06.026140: predicting hippocampus_049
2025-06-04 10:52:06.029375: hippocampus_049, shape torch.Size([1, 36, 51, 35]), rank 0
2025-06-04 10:52:06.039478: predicting hippocampus_052
2025-06-04 10:52:06.042116: hippocampus_052, shape torch.Size([1, 40, 52, 34]), rank 0
2025-06-04 10:52:06.052612: predicting hippocampus_065
2025-06-04 10:52:06.055048: hippocampus_065, shape torch.Size([1, 37, 52, 39]), rank 0
2025-06-04 10:52:06.065246: predicting hippocampus_083
2025-06-04 10:52:06.068183: hippocampus_083, shape torch.Size([1, 37, 52, 33]), rank 0
2025-06-04 10:52:06.078451: predicting hippocampus_088
2025-06-04 10:52:06.081018: hippocampus_088, shape torch.Size([1, 35, 52, 40]), rank 0
2025-06-04 10:52:06.091355: predicting hippocampus_090
2025-06-04 10:52:06.094522: hippocampus_090, shape torch.Size([1, 40, 50, 37]), rank 0
2025-06-04 10:52:06.104609: predicting hippocampus_092
2025-06-04 10:52:06.106866: hippocampus_092, shape torch.Size([1, 28, 49, 38]), rank 0
2025-06-04 10:52:06.117354: predicting hippocampus_095
2025-06-04 10:52:06.119606: hippocampus_095, shape torch.Size([1, 39, 49, 34]), rank 0
2025-06-04 10:52:06.130261: predicting hippocampus_107
2025-06-04 10:52:06.133089: hippocampus_107, shape torch.Size([1, 34, 55, 35]), rank 0
2025-06-04 10:52:06.143505: predicting hippocampus_108
2025-06-04 10:52:06.146076: hippocampus_108, shape torch.Size([1, 37, 53, 36]), rank 0
2025-06-04 10:52:06.156420: predicting hippocampus_123
2025-06-04 10:52:06.158747: hippocampus_123, shape torch.Size([1, 38, 53, 32]), rank 0
2025-06-04 10:52:06.168914: predicting hippocampus_125
2025-06-04 10:52:06.171655: hippocampus_125, shape torch.Size([1, 39, 42, 43]), rank 0
2025-06-04 10:52:06.188283: predicting hippocampus_157
2025-06-04 10:52:06.190894: hippocampus_157, shape torch.Size([1, 35, 51, 36]), rank 0
2025-06-04 10:52:06.201532: predicting hippocampus_164
2025-06-04 10:52:06.204942: hippocampus_164, shape torch.Size([1, 47, 48, 41]), rank 0
2025-06-04 10:52:06.233712: predicting hippocampus_169
2025-06-04 10:52:06.237333: hippocampus_169, shape torch.Size([1, 39, 45, 36]), rank 0
2025-06-04 10:52:06.247517: predicting hippocampus_175
2025-06-04 10:52:06.249694: hippocampus_175, shape torch.Size([1, 35, 47, 33]), rank 0
2025-06-04 10:52:06.260129: predicting hippocampus_185
2025-06-04 10:52:06.262694: hippocampus_185, shape torch.Size([1, 33, 49, 35]), rank 0
2025-06-04 10:52:06.274180: predicting hippocampus_190
2025-06-04 10:52:06.277628: hippocampus_190, shape torch.Size([1, 30, 52, 37]), rank 0
2025-06-04 10:52:06.289293: predicting hippocampus_194
2025-06-04 10:52:06.292321: hippocampus_194, shape torch.Size([1, 30, 50, 35]), rank 0
2025-06-04 10:52:06.302555: predicting hippocampus_204
2025-06-04 10:52:06.305261: hippocampus_204, shape torch.Size([1, 39, 48, 36]), rank 0
2025-06-04 10:52:06.315088: predicting hippocampus_205
2025-06-04 10:52:06.317576: hippocampus_205, shape torch.Size([1, 32, 47, 32]), rank 0
2025-06-04 10:52:06.328625: predicting hippocampus_210
2025-06-04 10:52:06.331581: hippocampus_210, shape torch.Size([1, 40, 48, 34]), rank 0
2025-06-04 10:52:06.341948: predicting hippocampus_217
2025-06-04 10:52:06.344661: hippocampus_217, shape torch.Size([1, 27, 53, 38]), rank 0
2025-06-04 10:52:06.354508: predicting hippocampus_219
2025-06-04 10:52:06.357060: hippocampus_219, shape torch.Size([1, 39, 45, 37]), rank 0
2025-06-04 10:52:06.366832: predicting hippocampus_229
2025-06-04 10:52:06.369218: hippocampus_229, shape torch.Size([1, 35, 50, 33]), rank 0
2025-06-04 10:52:06.379380: predicting hippocampus_244
2025-06-04 10:52:06.381549: hippocampus_244, shape torch.Size([1, 30, 53, 38]), rank 0
2025-06-04 10:52:06.391540: predicting hippocampus_261
2025-06-04 10:52:06.394794: hippocampus_261, shape torch.Size([1, 33, 58, 36]), rank 0
2025-06-04 10:52:06.410742: predicting hippocampus_264
2025-06-04 10:52:06.413101: hippocampus_264, shape torch.Size([1, 37, 51, 38]), rank 0
2025-06-04 10:52:06.422996: predicting hippocampus_277
2025-06-04 10:52:06.425936: hippocampus_277, shape torch.Size([1, 29, 59, 33]), rank 0
2025-06-04 10:52:06.441795: predicting hippocampus_280
2025-06-04 10:52:06.444348: hippocampus_280, shape torch.Size([1, 32, 47, 37]), rank 0
2025-06-04 10:52:06.454041: predicting hippocampus_286
2025-06-04 10:52:06.456577: hippocampus_286, shape torch.Size([1, 46, 45, 37]), rank 0
2025-06-04 10:52:06.472504: predicting hippocampus_288
2025-06-04 10:52:06.474812: hippocampus_288, shape torch.Size([1, 42, 50, 38]), rank 0
2025-06-04 10:52:06.491331: predicting hippocampus_289
2025-06-04 10:52:06.494736: hippocampus_289, shape torch.Size([1, 36, 49, 35]), rank 0
2025-06-04 10:52:06.506036: predicting hippocampus_296
2025-06-04 10:52:06.508827: hippocampus_296, shape torch.Size([1, 35, 54, 35]), rank 0
2025-06-04 10:52:06.519989: predicting hippocampus_305
2025-06-04 10:52:06.522864: hippocampus_305, shape torch.Size([1, 30, 49, 34]), rank 0
2025-06-04 10:52:06.534273: predicting hippocampus_308
2025-06-04 10:52:06.536858: hippocampus_308, shape torch.Size([1, 40, 48, 38]), rank 0
2025-06-04 10:52:06.546511: predicting hippocampus_317
2025-06-04 10:52:06.549072: hippocampus_317, shape torch.Size([1, 34, 51, 33]), rank 0
2025-06-04 10:52:06.559390: predicting hippocampus_327
2025-06-04 10:52:06.561532: hippocampus_327, shape torch.Size([1, 27, 54, 36]), rank 0
2025-06-04 10:52:06.571438: predicting hippocampus_330
2025-06-04 10:52:06.573514: hippocampus_330, shape torch.Size([1, 33, 55, 35]), rank 0
2025-06-04 10:52:06.583656: predicting hippocampus_332
2025-06-04 10:52:06.586283: hippocampus_332, shape torch.Size([1, 33, 52, 35]), rank 0
2025-06-04 10:52:06.596023: predicting hippocampus_338
2025-06-04 10:52:06.598215: hippocampus_338, shape torch.Size([1, 43, 43, 37]), rank 0
2025-06-04 10:52:06.614335: predicting hippocampus_349
2025-06-04 10:52:06.617100: hippocampus_349, shape torch.Size([1, 34, 50, 34]), rank 0
2025-06-04 10:52:06.627796: predicting hippocampus_350
2025-06-04 10:52:06.630797: hippocampus_350, shape torch.Size([1, 34, 49, 35]), rank 0
2025-06-04 10:52:06.640604: predicting hippocampus_356
2025-06-04 10:52:06.643083: hippocampus_356, shape torch.Size([1, 37, 51, 36]), rank 0
2025-06-04 10:52:06.652695: predicting hippocampus_358
2025-06-04 10:52:06.655776: hippocampus_358, shape torch.Size([1, 34, 50, 35]), rank 0
2025-06-04 10:52:06.665552: predicting hippocampus_374
2025-06-04 10:52:06.667946: hippocampus_374, shape torch.Size([1, 39, 48, 38]), rank 0
2025-06-04 10:52:06.682891: predicting hippocampus_394
2025-06-04 10:52:06.686135: hippocampus_394, shape torch.Size([1, 32, 52, 36]), rank 0
2025-06-04 10:52:09.742277: Validation complete
2025-06-04 10:52:09.743878: Mean Validation Dice:  0.8684989757703176

Secondary Machine:

[vmiller@goliath work]$ /home/vmiller/miniforge/bin/conda create -n deleteme python=3.11
Channels:
 - conda-forge
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
    current version: 25.3.0
    latest version: 25.5.0

Please update conda by running

    $ conda update -n base -c conda-forge conda



## Package Plan ##

  environment location: /home/vmiller/miniforge/envs/deleteme

  added / updated specs:
    - python=3.11


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    libsqlite-3.50.0           |       hee588c1_0         897 KB  conda-forge
    python-3.11.12             |h9e4cc4f_0_cpython        29.1 MB  conda-forge
    setuptools-80.9.0          |     pyhff2d567_0         731 KB  conda-forge
    tk-8.6.13                  |noxft_hd72426e_102         3.1 MB  conda-forge
    ------------------------------------------------------------
                                           Total:        33.9 MB

The following NEW packages will be INSTALLED:

  _libgcc_mutex      conda-forge/linux-64::_libgcc_mutex-0.1-conda_forge 
  _openmp_mutex      conda-forge/linux-64::_openmp_mutex-4.5-2_gnu 
  bzip2              conda-forge/linux-64::bzip2-1.0.8-h4bc722e_7 
  ca-certificates    conda-forge/noarch::ca-certificates-2025.4.26-hbd8a1cb_0 
  ld_impl_linux-64   conda-forge/linux-64::ld_impl_linux-64-2.43-h712a8e2_4 
  libexpat           conda-forge/linux-64::libexpat-2.7.0-h5888daf_0 
  libffi             conda-forge/linux-64::libffi-3.4.6-h2dba641_1 
  libgcc             conda-forge/linux-64::libgcc-15.1.0-h767d61c_2 
  libgcc-ng          conda-forge/linux-64::libgcc-ng-15.1.0-h69a702a_2 
  libgomp            conda-forge/linux-64::libgomp-15.1.0-h767d61c_2 
  liblzma            conda-forge/linux-64::liblzma-5.8.1-hb9d3cd8_1 
  libnsl             conda-forge/linux-64::libnsl-2.0.1-hd590300_0 
  libsqlite          conda-forge/linux-64::libsqlite-3.50.0-hee588c1_0 
  libuuid            conda-forge/linux-64::libuuid-2.38.1-h0b41bf4_0 
  libxcrypt          conda-forge/linux-64::libxcrypt-4.4.36-hd590300_1 
  libzlib            conda-forge/linux-64::libzlib-1.3.1-hb9d3cd8_2 
  ncurses            conda-forge/linux-64::ncurses-6.5-h2d0b736_3 
  openssl            conda-forge/linux-64::openssl-3.5.0-h7b32b05_1 
  pip                conda-forge/noarch::pip-25.1.1-pyh8b19718_0 
  python             conda-forge/linux-64::python-3.11.12-h9e4cc4f_0_cpython 
  readline           conda-forge/linux-64::readline-8.2-h8c095d6_2 
  setuptools         conda-forge/noarch::setuptools-80.9.0-pyhff2d567_0 
  tk                 conda-forge/linux-64::tk-8.6.13-noxft_hd72426e_102 
  tzdata             conda-forge/noarch::tzdata-2025b-h78e105d_0 
  wheel              conda-forge/noarch::wheel-0.45.1-pyhd8ed1ab_1 


Proceed ([y]/n)? y


Downloading and Extracting Packages:
                                                                                
Preparing transaction: done                                                     
Verifying transaction: done                                                     
Executing transaction: done                                                     
#
# To activate this environment, use
#
#     $ conda activate deleteme
#
# To deactivate an active environment, use
#
#     $ conda deactivate

[vmiller@goliath work]$ source ~/.bashrc
(base) [vmiller@goliath work]$ source /home/vmiller/miniforge/etc/profile.d/conda.sh
(base) [vmiller@goliath work]$ conda activate deleteme
(deleteme) [vmiller@goliath work]$ pip install nnunetv2
Collecting nnunetv2
  Downloading nnunetv2-2.6.2.tar.gz (211 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting torch>=2.1.2 (from nnunetv2)
  Downloading torch-2.7.0-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (29 kB)
Collecting acvl-utils<0.3,>=0.2.3 (from nnunetv2)
  Downloading acvl_utils-0.2.5.tar.gz (29 kB)
  Preparing metadata (setup.py) ... done
Collecting dynamic-network-architectures<0.5,>=0.4.1 (from nnunetv2)
  Downloading dynamic_network_architectures-0.4.1.tar.gz (28 kB)
  Preparing metadata (setup.py) ... done
Collecting tqdm (from nnunetv2)
  Using cached tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
Collecting scipy (from nnunetv2)
  Downloading scipy-1.15.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting batchgenerators>=0.25.1 (from nnunetv2)
  Downloading batchgenerators-0.25.1.tar.gz (76 kB)
  Preparing metadata (setup.py) ... done
Collecting numpy>=1.24 (from nnunetv2)
  Downloading numpy-2.2.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (62 kB)
Collecting scikit-learn (from nnunetv2)
  Downloading scikit_learn-1.6.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (18 kB)
Collecting scikit-image>=0.19.3 (from nnunetv2)
  Downloading scikit_image-0.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (14 kB)
Collecting SimpleITK>=2.2.1 (from nnunetv2)
  Downloading simpleitk-2.5.0-cp311-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.2 kB)
Collecting pandas (from nnunetv2)
  Using cached pandas-2.2.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (89 kB)
Collecting graphviz (from nnunetv2)
  Downloading graphviz-0.20.3-py3-none-any.whl.metadata (12 kB)
Collecting tifffile (from nnunetv2)
  Downloading tifffile-2025.6.1-py3-none-any.whl.metadata (32 kB)
Collecting requests (from nnunetv2)
  Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting nibabel (from nnunetv2)
  Using cached nibabel-5.3.2-py3-none-any.whl.metadata (9.1 kB)
Collecting matplotlib (from nnunetv2)
  Downloading matplotlib-3.10.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
Collecting seaborn (from nnunetv2)
  Downloading seaborn-0.13.2-py3-none-any.whl.metadata (5.4 kB)
Collecting imagecodecs (from nnunetv2)
  Downloading imagecodecs-2025.3.30-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (20 kB)
Collecting yacs (from nnunetv2)
  Downloading yacs-0.1.8-py3-none-any.whl.metadata (639 bytes)
Collecting batchgeneratorsv2>=0.3.0 (from nnunetv2)
  Downloading batchgeneratorsv2-0.3.0.tar.gz (44 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting einops (from nnunetv2)
  Downloading einops-0.8.1-py3-none-any.whl.metadata (13 kB)
Collecting blosc2>=3.0.0b1 (from nnunetv2)
  Downloading blosc2-3.3.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.9 kB)
Collecting connected-components-3d (from acvl-utils<0.3,>=0.2.3->nnunetv2)
  Downloading connected_components_3d-3.23.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (32 kB)
Collecting timm (from dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Downloading timm-1.0.15-py3-none-any.whl.metadata (52 kB)
Collecting pillow>=7.1.2 (from batchgenerators>=0.25.1->nnunetv2)
  Downloading pillow-11.2.1-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (8.9 kB)
Collecting future (from batchgenerators>=0.25.1->nnunetv2)
  Downloading future-1.0.0-py3-none-any.whl.metadata (4.0 kB)
Collecting unittest2 (from batchgenerators>=0.25.1->nnunetv2)
  Downloading unittest2-1.1.0-py2.py3-none-any.whl.metadata (15 kB)
Collecting threadpoolctl (from batchgenerators>=0.25.1->nnunetv2)
  Downloading threadpoolctl-3.6.0-py3-none-any.whl.metadata (13 kB)
Collecting fft-conv-pytorch (from batchgeneratorsv2>=0.3.0->nnunetv2)
  Downloading fft_conv_pytorch-1.2.0-py3-none-any.whl.metadata (2.8 kB)
Collecting ndindex (from blosc2>=3.0.0b1->nnunetv2)
  Downloading ndindex-1.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.6 kB)
Collecting msgpack (from blosc2>=3.0.0b1->nnunetv2)
  Downloading msgpack-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.4 kB)
Collecting platformdirs (from blosc2>=3.0.0b1->nnunetv2)
  Downloading platformdirs-4.3.8-py3-none-any.whl.metadata (12 kB)
Collecting numexpr (from blosc2>=3.0.0b1->nnunetv2)
  Downloading numexpr-2.10.2-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (8.1 kB)
Collecting py-cpuinfo (from blosc2>=3.0.0b1->nnunetv2)
  Downloading py_cpuinfo-9.0.0-py3-none-any.whl.metadata (794 bytes)
Collecting networkx>=3.0 (from scikit-image>=0.19.3->nnunetv2)
  Downloading networkx-3.5-py3-none-any.whl.metadata (6.3 kB)
Collecting imageio!=2.35.0,>=2.33 (from scikit-image>=0.19.3->nnunetv2)
  Downloading imageio-2.37.0-py3-none-any.whl.metadata (5.2 kB)
Collecting packaging>=21 (from scikit-image>=0.19.3->nnunetv2)
  Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
Collecting lazy-loader>=0.4 (from scikit-image>=0.19.3->nnunetv2)
  Using cached lazy_loader-0.4-py3-none-any.whl.metadata (7.6 kB)
Collecting filelock (from torch>=2.1.2->nnunetv2)
  Downloading filelock-3.18.0-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions>=4.10.0 (from torch>=2.1.2->nnunetv2)
  Downloading typing_extensions-4.14.0-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy>=1.13.3 (from torch>=2.1.2->nnunetv2)
  Downloading sympy-1.14.0-py3-none-any.whl.metadata (12 kB)
Collecting jinja2 (from torch>=2.1.2->nnunetv2)
  Downloading jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
Collecting fsspec (from torch>=2.1.2->nnunetv2)
  Downloading fsspec-2025.5.1-py3-none-any.whl.metadata (11 kB)
Collecting nvidia-cuda-nvrtc-cu12==12.6.77 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-runtime-cu12==12.6.77 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-cupti-cu12==12.6.80 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cudnn-cu12==9.5.1.17 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu12==12.6.4.1 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufft-cu12==11.3.0.4 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-curand-cu12==10.3.7.77 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cusolver-cu12==11.7.1.2 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu12==12.5.4.2 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparselt-cu12==0.6.3 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cusparselt_cu12-0.6.3-py3-none-manylinux2014_x86_64.whl.metadata (6.8 kB)
Collecting nvidia-nccl-cu12==2.26.2 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_nccl_cu12-2.26.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.0 kB)
Collecting nvidia-nvtx-cu12==12.6.77 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nvjitlink-cu12==12.6.85 (from torch>=2.1.2->nnunetv2)
  Using cached nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufile-cu12==1.11.1.6 (from torch>=2.1.2->nnunetv2)
  Downloading nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting triton==3.3.0 (from torch>=2.1.2->nnunetv2)
  Downloading triton-3.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (1.5 kB)
Requirement already satisfied: setuptools>=40.8.0 in /home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages (from triton==3.3.0->torch>=2.1.2->nnunetv2) (80.9.0)
Collecting mpmath<1.4,>=1.1.0 (from sympy>=1.13.3->torch>=2.1.2->nnunetv2)
  Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch>=2.1.2->nnunetv2)
  Using cached MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting contourpy>=1.0.1 (from matplotlib->nnunetv2)
  Downloading contourpy-1.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.5 kB)
Collecting cycler>=0.10 (from matplotlib->nnunetv2)
  Using cached cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB)
Collecting fonttools>=4.22.0 (from matplotlib->nnunetv2)
  Downloading fonttools-4.58.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (106 kB)
Collecting kiwisolver>=1.3.1 (from matplotlib->nnunetv2)
  Downloading kiwisolver-1.4.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.2 kB)
Collecting pyparsing>=2.3.1 (from matplotlib->nnunetv2)
  Downloading pyparsing-3.2.3-py3-none-any.whl.metadata (5.0 kB)
Collecting python-dateutil>=2.7 (from matplotlib->nnunetv2)
  Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting six>=1.5 (from python-dateutil>=2.7->matplotlib->nnunetv2)
  Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting importlib-resources>=5.12 (from nibabel->nnunetv2)
  Downloading importlib_resources-6.5.2-py3-none-any.whl.metadata (3.9 kB)
Collecting pytz>=2020.1 (from pandas->nnunetv2)
  Downloading pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas->nnunetv2)
  Downloading tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting charset-normalizer<4,>=2 (from requests->nnunetv2)
  Downloading charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests->nnunetv2)
  Using cached idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting urllib3<3,>=1.21.1 (from requests->nnunetv2)
  Downloading urllib3-2.4.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests->nnunetv2)
  Downloading certifi-2025.4.26-py3-none-any.whl.metadata (2.5 kB)
Collecting joblib>=1.2.0 (from scikit-learn->nnunetv2)
  Downloading joblib-1.5.1-py3-none-any.whl.metadata (5.6 kB)
Collecting torchvision (from timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Downloading torchvision-0.22.0-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (6.1 kB)
Collecting pyyaml (from timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Using cached PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting huggingface_hub (from timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Downloading huggingface_hub-0.32.4-py3-none-any.whl.metadata (14 kB)
Collecting safetensors (from timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Downloading safetensors-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.8 kB)
Collecting hf-xet<2.0.0,>=1.1.2 (from huggingface_hub->timm->dynamic-network-architectures<0.5,>=0.4.1->nnunetv2)
  Downloading hf_xet-1.1.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (879 bytes)
Collecting argparse (from unittest2->batchgenerators>=0.25.1->nnunetv2)
  Downloading argparse-1.4.0-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting traceback2 (from unittest2->batchgenerators>=0.25.1->nnunetv2)
  Downloading traceback2-1.4.0-py2.py3-none-any.whl.metadata (1.5 kB)
Collecting linecache2 (from traceback2->unittest2->batchgenerators>=0.25.1->nnunetv2)
  Downloading linecache2-1.0.0-py2.py3-none-any.whl.metadata (1000 bytes)
Downloading blosc2-3.3.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.4 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.4/4.4 MB 10.1 MB/s eta 0:00:00
Downloading numpy-2.2.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.8/16.8 MB 13.9 MB/s eta 0:00:00
Downloading pillow-11.2.1-cp311-cp311-manylinux_2_28_x86_64.whl (4.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.6/4.6 MB 12.9 MB/s eta 0:00:00
Downloading scikit_image-0.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (14.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.8/14.8 MB 14.1 MB/s eta 0:00:00
Downloading imageio-2.37.0-py3-none-any.whl (315 kB)
Using cached lazy_loader-0.4-py3-none-any.whl (12 kB)
Downloading networkx-3.5-py3-none-any.whl (2.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 16.2 MB/s eta 0:00:00
Downloading packaging-25.0-py3-none-any.whl (66 kB)
Downloading scipy-1.15.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37.7 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 37.7/37.7 MB 14.8 MB/s eta 0:00:00
Downloading simpleitk-2.5.0-cp311-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (52.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.6/52.6 MB 14.2 MB/s eta 0:00:00
Downloading tifffile-2025.6.1-py3-none-any.whl (230 kB)
Downloading torch-2.7.0-cp311-cp311-manylinux_2_28_x86_64.whl (865.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 865.2/865.2 MB 12.4 MB/s eta 0:00:00
Downloading nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (393.1 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 393.1/393.1 MB 14.2 MB/s eta 0:00:00
Downloading nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (8.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.9/8.9 MB 15.8 MB/s eta 0:00:00
Downloading nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl (23.7 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.7/23.7 MB 15.1 MB/s eta 0:00:00
Downloading nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (897 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 897.7/897.7 kB 13.7 MB/s eta 0:00:00
Downloading nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl (571.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 571.0/571.0 MB 14.3 MB/s eta 0:00:00
Downloading nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (200.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 200.2/200.2 MB 14.8 MB/s eta 0:00:00
Downloading nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (1.1 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 13.4 MB/s eta 0:00:00
Downloading nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (56.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.3/56.3 MB 14.8 MB/s eta 0:00:00
Downloading nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (158.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 158.2/158.2 MB 14.4 MB/s eta 0:00:00
Downloading nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (216.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 216.6/216.6 MB 14.4 MB/s eta 0:00:00
Downloading nvidia_cusparselt_cu12-0.6.3-py3-none-manylinux2014_x86_64.whl (156.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 156.8/156.8 MB 14.5 MB/s eta 0:00:00
Downloading nvidia_nccl_cu12-2.26.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (201.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 201.3/201.3 MB 14.9 MB/s eta 0:00:00
Using cached nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl (19.7 MB)
Downloading nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89 kB)
Downloading triton-3.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (156.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 156.5/156.5 MB 14.5 MB/s eta 0:00:00
Downloading sympy-1.14.0-py3-none-any.whl (6.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.3/6.3 MB 17.2 MB/s eta 0:00:00
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Downloading typing_extensions-4.14.0-py3-none-any.whl (43 kB)
Downloading connected_components_3d-3.23.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.3/4.3 MB 15.6 MB/s eta 0:00:00
Downloading einops-0.8.1-py3-none-any.whl (64 kB)
Downloading fft_conv_pytorch-1.2.0-py3-none-any.whl (6.8 kB)
Downloading filelock-3.18.0-py3-none-any.whl (16 kB)
Downloading fsspec-2025.5.1-py3-none-any.whl (199 kB)
Downloading future-1.0.0-py3-none-any.whl (491 kB)
Downloading graphviz-0.20.3-py3-none-any.whl (47 kB)
Downloading imagecodecs-2025.3.30-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (45.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.6/45.6 MB 14.9 MB/s eta 0:00:00
Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)
Using cached MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23 kB)
Downloading matplotlib-3.10.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (8.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.6/8.6 MB 15.3 MB/s eta 0:00:00
Downloading contourpy-1.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (326 kB)
Using cached cycler-0.12.1-py3-none-any.whl (8.3 kB)
Downloading fonttools-4.58.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 16.4 MB/s eta 0:00:00
Downloading kiwisolver-1.4.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.4 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.4/1.4 MB 11.5 MB/s eta 0:00:00
Downloading pyparsing-3.2.3-py3-none-any.whl (111 kB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Downloading msgpack-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (403 kB)
Downloading ndindex-1.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (528 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 528.2/528.2 kB 12.4 MB/s eta 0:00:00
Using cached nibabel-5.3.2-py3-none-any.whl (3.3 MB)
Downloading importlib_resources-6.5.2-py3-none-any.whl (37 kB)
Downloading numexpr-2.10.2-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (398 kB)
Using cached pandas-2.2.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.1 MB)
Downloading pytz-2025.2-py2.py3-none-any.whl (509 kB)
Downloading tzdata-2025.2-py2.py3-none-any.whl (347 kB)
Downloading platformdirs-4.3.8-py3-none-any.whl (18 kB)
Downloading py_cpuinfo-9.0.0-py3-none-any.whl (22 kB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Downloading charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (147 kB)
Using cached idna-3.10-py3-none-any.whl (70 kB)
Downloading urllib3-2.4.0-py3-none-any.whl (128 kB)
Downloading certifi-2025.4.26-py3-none-any.whl (159 kB)
Downloading scikit_learn-1.6.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.5/13.5 MB 16.7 MB/s eta 0:00:00
Downloading joblib-1.5.1-py3-none-any.whl (307 kB)
Downloading threadpoolctl-3.6.0-py3-none-any.whl (18 kB)
Downloading seaborn-0.13.2-py3-none-any.whl (294 kB)
Downloading timm-1.0.15-py3-none-any.whl (2.4 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 17.0 MB/s eta 0:00:00
Downloading huggingface_hub-0.32.4-py3-none-any.whl (512 kB)
Downloading hf_xet-1.1.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.8/4.8 MB 17.4 MB/s eta 0:00:00
Using cached PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (762 kB)
Using cached tqdm-4.67.1-py3-none-any.whl (78 kB)
Downloading safetensors-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (471 kB)
Downloading torchvision-0.22.0-cp311-cp311-manylinux_2_28_x86_64.whl (7.4 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.4/7.4 MB 16.5 MB/s eta 0:00:00
Downloading unittest2-1.1.0-py2.py3-none-any.whl (96 kB)
Downloading argparse-1.4.0-py2.py3-none-any.whl (23 kB)
Downloading traceback2-1.4.0-py2.py3-none-any.whl (16 kB)
Downloading linecache2-1.0.0-py2.py3-none-any.whl (12 kB)
Downloading yacs-0.1.8-py3-none-any.whl (14 kB)
Building wheels for collected packages: nnunetv2, acvl-utils, dynamic-network-architectures, batchgenerators, batchgeneratorsv2
  Building wheel for nnunetv2 (pyproject.toml) ... done
  Created wheel for nnunetv2: filename=nnunetv2-2.6.2-py3-none-any.whl size=285890 sha256=a877dc7c2abc2dadcdb590deb314e44c3db99539c9ea76e63c8169e6d9f23f89
  Stored in directory: /home/vmiller/.cache/pip/wheels/99/ec/d2/0fb1be0015c40f2dc99535af585e41e876dd2b369039d9385b
  DEPRECATION: Building 'acvl-utils' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'acvl-utils'. Discussion can be found at https://github.com/pypa/pip/issues/6334
  Building wheel for acvl-utils (setup.py) ... done
  Created wheel for acvl-utils: filename=acvl_utils-0.2.5-py3-none-any.whl size=27243 sha256=b714f3760ade5b99e9c4a63ed6fb7b2beca5e4a454c77f6f635240d5d76da66e
  Stored in directory: /home/vmiller/.cache/pip/wheels/3f/8c/10/dcba79e0b2d1d463605233cec1fc6cfad47af5230b8985e464
  DEPRECATION: Building 'dynamic-network-architectures' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'dynamic-network-architectures'. Discussion can be found at https://github.com/pypa/pip/issues/6334
  Building wheel for dynamic-network-architectures (setup.py) ... done
  Created wheel for dynamic-network-architectures: filename=dynamic_network_architectures-0.4.1-py3-none-any.whl size=39075 sha256=8e828fab729c5497a15e6d4c8c4663f207cd40cf10c7e277b2f0e56834e6c34c
  Stored in directory: /home/vmiller/.cache/pip/wheels/41/6c/b8/690b97a6ea5b63e698d0efd0fd8842ae4f45657cd32b83bed4
  DEPRECATION: Building 'batchgenerators' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'batchgenerators'. Discussion can be found at https://github.com/pypa/pip/issues/6334
  Building wheel for batchgenerators (setup.py) ... done
  Created wheel for batchgenerators: filename=batchgenerators-0.25.1-py3-none-any.whl size=93163 sha256=0dc637a2624dac0254494f359e1457742f74ed0b37bfa63239b00b98a4ad1223
  Stored in directory: /home/vmiller/.cache/pip/wheels/56/11/c7/fadca30e054c602093ffe36ba8a2f0a87dd2f86ac75191d3ed
  Building wheel for batchgeneratorsv2 (pyproject.toml) ... done
  Created wheel for batchgeneratorsv2: filename=batchgeneratorsv2-0.3.0-py3-none-any.whl size=65215 sha256=afe77b548d300a3a0a8c4c6cc7f0b4282e4b7854f89dfaa4b3e3f56089ac0ccc
  Stored in directory: /home/vmiller/.cache/pip/wheels/c0/c1/8f/94ca60255dbbadf27e1da4885002a6943c95b067b8e2dd39ea
Successfully built nnunetv2 acvl-utils dynamic-network-architectures batchgenerators batchgeneratorsv2
Installing collected packages: SimpleITK, pytz, py-cpuinfo, nvidia-cusparselt-cu12, mpmath, linecache2, argparse, urllib3, tzdata, typing-extensions, triton, traceback2, tqdm, threadpoolctl, sympy, six, safetensors, pyyaml, pyparsing, platformdirs, pillow, packaging, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufile-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, networkx, ndindex, msgpack, MarkupSafe, kiwisolver, joblib, importlib-resources, idna, hf-xet, graphviz, future, fsspec, fonttools, filelock, einops, cycler, charset-normalizer, certifi, yacs, unittest2, tifffile, scipy, requests, python-dateutil, nvidia-cusparse-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, numexpr, nibabel, lazy-loader, jinja2, imageio, imagecodecs, contourpy, connected-components-3d, scikit-learn, scikit-image, pandas, nvidia-cusolver-cu12, matplotlib, huggingface_hub, blosc2, torch, seaborn, batchgenerators, torchvision, fft-conv-pytorch, acvl-utils, timm, batchgeneratorsv2, dynamic-network-architectures, nnunetv2
Successfully installed MarkupSafe-3.0.2 SimpleITK-2.5.0 acvl-utils-0.2.5 argparse-1.4.0 batchgenerators-0.25.1 batchgeneratorsv2-0.3.0 blosc2-3.3.4 certifi-2025.4.26 charset-normalizer-3.4.2 connected-components-3d-3.23.0 contourpy-1.3.2 cycler-0.12.1 dynamic-network-architectures-0.4.1 einops-0.8.1 fft-conv-pytorch-1.2.0 filelock-3.18.0 fonttools-4.58.1 fsspec-2025.5.1 future-1.0.0 graphviz-0.20.3 hf-xet-1.1.3 huggingface_hub-0.32.4 idna-3.10 imagecodecs-2025.3.30 imageio-2.37.0 importlib-resources-6.5.2 jinja2-3.1.6 joblib-1.5.1 kiwisolver-1.4.8 lazy-loader-0.4 linecache2-1.0.0 matplotlib-3.10.3 mpmath-1.3.0 msgpack-1.1.0 ndindex-1.10.0 networkx-3.5 nibabel-5.3.2 nnunetv2-2.6.2 numexpr-2.10.2 numpy-2.2.6 nvidia-cublas-cu12-12.6.4.1 nvidia-cuda-cupti-cu12-12.6.80 nvidia-cuda-nvrtc-cu12-12.6.77 nvidia-cuda-runtime-cu12-12.6.77 nvidia-cudnn-cu12-9.5.1.17 nvidia-cufft-cu12-11.3.0.4 nvidia-cufile-cu12-1.11.1.6 nvidia-curand-cu12-10.3.7.77 nvidia-cusolver-cu12-11.7.1.2 nvidia-cusparse-cu12-12.5.4.2 nvidia-cusparselt-cu12-0.6.3 nvidia-nccl-cu12-2.26.2 nvidia-nvjitlink-cu12-12.6.85 nvidia-nvtx-cu12-12.6.77 packaging-25.0 pandas-2.2.3 pillow-11.2.1 platformdirs-4.3.8 py-cpuinfo-9.0.0 pyparsing-3.2.3 python-dateutil-2.9.0.post0 pytz-2025.2 pyyaml-6.0.2 requests-2.32.3 safetensors-0.5.3 scikit-image-0.25.2 scikit-learn-1.6.1 scipy-1.15.3 seaborn-0.13.2 six-1.17.0 sympy-1.14.0 threadpoolctl-3.6.0 tifffile-2025.6.1 timm-1.0.15 torch-2.7.0 torchvision-0.22.0 tqdm-4.67.1 traceback2-1.4.0 triton-3.3.0 typing-extensions-4.14.0 tzdata-2025.2 unittest2-1.1.0 urllib3-2.4.0 yacs-0.1.8
(deleteme) [vmiller@goliath work]$ nnUNetv2_train 4 3d_fullres 0 -tr nnUNetTrainer_5epochs

############################
INFO: You are using the old nnU-Net default plans. We have updated our recommendations. Please consider using those instead! Read more here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md
############################

Using device: cuda:0

#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################

2025-06-04 11:22:53.011002: Using torch.compile...
2025-06-04 11:22:53.768734: do_dummy_2d_data_aug: False
2025-06-04 11:22:53.769686: Using splits from existing split file: /home/vmiller/work/nnUNet/nnUNet_preprocessed/Dataset004_Hippocampus/splits_final.json
2025-06-04 11:22:53.770301: The split file contains 5 splits.
2025-06-04 11:22:53.770719: Desired fold for training: 0
2025-06-04 11:22:53.771141: This split has 208 training and 52 validation cases.
using pin_memory on device 0
Exception in thread Thread-2 (results_loop):
Traceback (most recent call last):
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 125, in results_loop
    raise e
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 103, in results_loop
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
Traceback (most recent call last):
  File "/home/vmiller/miniforge/envs/deleteme/bin/nnUNetv2_train", line 8, in <module>
    sys.exit(run_training_entry())
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/run/run_training.py", line 266, in run_training_entry
    run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/run/run_training.py", line 207, in run_training
    nnunet_trainer.run_training()
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 1363, in run_training
    self.on_train_start()
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 900, in on_train_start
    self.dataloader_train, self.dataloader_val = self.get_dataloaders()
                                                 ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 690, in get_dataloaders
    _ = next(mt_gen_train)
        ^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 196, in __next__
    item = self.__get_next_item()
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/batchgenerators/dataloading/nondet_multi_threaded_augmenter.py", line 181, in __get_next_item
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
(deleteme) [vmiller@goliath work]$ nnUNetv2_train 4 3d_fullres 0 -tr nnUNetTrainer_5epochs

############################
INFO: You are using the old nnU-Net default plans. We have updated our recommendations. Please consider using those instead! Read more here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md
############################

Using device: cuda:0

#######################################################################
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.
#######################################################################

2025-06-04 11:25:58.100531: Using torch.compile...
2025-06-04 11:25:58.852352: do_dummy_2d_data_aug: False
2025-06-04 11:25:58.853314: Using splits from existing split file: /home/vmiller/work/nnUNet/nnUNet_preprocessed/Dataset004_Hippocampus/splits_final.json
2025-06-04 11:25:58.853923: The split file contains 5 splits.
2025-06-04 11:25:58.854347: Desired fold for training: 0
2025-06-04 11:25:58.854797: This split has 208 training and 52 validation cases.

This is the configuration used by this training:
Configuration name: 3d_fullres
 {'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 9, 'patch_size': [40, 56, 40], 'median_image_size_in_voxels': [36.0, 50.0, 35.0], 'spacing': [1.0, 1.0, 1.0], 'normalization_schemes': ['ZScoreNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.PlainConvUNet', 'arch_kwargs': {'n_stages': 4, 'features_per_stage': [32, 64, 128, 256], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2]], 'n_conv_per_stage': [2, 2, 2, 2], 'n_conv_per_stage_decoder': [2, 2, 2], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': False} 

These are the global plan.json settings:
 {'dataset_name': 'Dataset004_Hippocampus', 'plans_name': 'nnUNetPlans', 'original_median_spacing_after_transp': [1.0, 1.0, 1.0], 'original_median_shape_after_transp': [36, 50, 35], 'image_reader_writer': 'SimpleITKIO', 'transpose_forward': [0, 1, 2], 'transpose_backward': [0, 1, 2], 'experiment_planner_used': 'ExperimentPlanner', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 486420.21875, 'mean': 22360.326171875, 'median': 362.88250732421875, 'min': 0.0, 'percentile_00_5': 28.0, 'percentile_99_5': 277682.03125, 'std': 60656.1328125}}} 

2025-06-04 11:25:59.019124: Unable to plot network architecture: nnUNet_compile is enabled!
2025-06-04 11:25:59.025042: 
2025-06-04 11:25:59.026216: Epoch 0
2025-06-04 11:25:59.027193: Current learning rate: 0.01
/home/vmiller/miniforge/envs/deleteme/lib/python3.11/site-packages/torch/_inductor/lowering.py:7007: UserWarning: 
Online softmax is disabled on the fly since Inductor decides to
split the reduction. Cut an issue to PyTorch if this is an
important use case and you want to speed it up with online
softmax.

  warnings.warn(
2025-06-04 11:26:50.148759: train_loss -0.2161
2025-06-04 11:26:50.149921: val_loss -0.4913
2025-06-04 11:26:50.151183: Pseudo dice [np.float32(0.6032), np.float32(0.1903)]
2025-06-04 11:26:50.152279: Epoch time: 51.12 s
2025-06-04 11:26:50.153330: Yayy! New best EMA pseudo Dice: 0.396699994802475
2025-06-04 11:26:50.751540: 
2025-06-04 11:26:50.753049: Epoch 1
2025-06-04 11:26:50.754147: Current learning rate: 0.00818
2025-06-04 11:27:19.941406: train_loss -0.6408
2025-06-04 11:27:19.942913: val_loss -0.7902
2025-06-04 11:27:19.944066: Pseudo dice [np.float32(0.8504), np.float32(0.8317)]
2025-06-04 11:27:19.945191: Epoch time: 29.19 s
2025-06-04 11:27:19.946385: Yayy! New best EMA pseudo Dice: 0.44119998812675476
2025-06-04 11:27:20.618827: 
2025-06-04 11:27:20.620298: Epoch 2
2025-06-04 11:27:20.621491: Current learning rate: 0.00631
2025-06-04 11:27:51.828824: train_loss -0.7872
2025-06-04 11:27:51.830393: val_loss -0.8086
2025-06-04 11:27:51.831616: Pseudo dice [np.float32(0.8652), np.float32(0.8478)]
2025-06-04 11:27:51.832833: Epoch time: 31.21 s
2025-06-04 11:27:51.834051: Yayy! New best EMA pseudo Dice: 0.482699990272522
2025-06-04 11:27:52.540280: 
2025-06-04 11:27:52.541801: Epoch 3
2025-06-04 11:27:52.543083: Current learning rate: 0.00438
2025-06-04 11:28:22.866085: train_loss -0.81
2025-06-04 11:28:22.867663: val_loss -0.8107
2025-06-04 11:28:22.868860: Pseudo dice [np.float32(0.8649), np.float32(0.8505)]
2025-06-04 11:28:22.870043: Epoch time: 30.33 s
2025-06-04 11:28:22.871223: Yayy! New best EMA pseudo Dice: 0.5202000141143799
2025-06-04 11:28:23.533179: 
2025-06-04 11:28:23.534766: Epoch 4
2025-06-04 11:28:23.536022: Current learning rate: 0.00235
2025-06-04 11:28:53.448920: train_loss -0.819
2025-06-04 11:28:53.450556: val_loss -0.8245
2025-06-04 11:28:53.451812: Pseudo dice [np.float32(0.8746), np.float32(0.8609)]
2025-06-04 11:28:53.453040: Epoch time: 29.92 s
2025-06-04 11:28:53.454310: Yayy! New best EMA pseudo Dice: 0.5550000071525574
2025-06-04 11:28:54.222808: Training done.
2025-06-04 11:28:54.233453: Using splits from existing split file: /home/vmiller/work/nnUNet/nnUNet_preprocessed/Dataset004_Hippocampus/splits_final.json
2025-06-04 11:28:54.235491: The split file contains 5 splits.
2025-06-04 11:28:54.236760: Desired fold for training: 0
2025-06-04 11:28:54.238181: This split has 208 training and 52 validation cases.
2025-06-04 11:28:54.239837: predicting hippocampus_017
2025-06-04 11:28:54.242385: hippocampus_017, shape torch.Size([1, 32, 48, 35]), rank 0
2025-06-04 11:29:01.906956: predicting hippocampus_019
2025-06-04 11:29:01.911082: hippocampus_019, shape torch.Size([1, 41, 47, 36]), rank 0
2025-06-04 11:29:01.938163: predicting hippocampus_033
2025-06-04 11:29:01.942109: hippocampus_033, shape torch.Size([1, 38, 48, 33]), rank 0
2025-06-04 11:29:01.957989: predicting hippocampus_035
2025-06-04 11:29:01.961390: hippocampus_035, shape torch.Size([1, 37, 47, 35]), rank 0
2025-06-04 11:29:01.976962: predicting hippocampus_037
2025-06-04 11:29:01.981088: hippocampus_037, shape torch.Size([1, 32, 51, 34]), rank 0
2025-06-04 11:29:01.997565: predicting hippocampus_049
2025-06-04 11:29:02.002586: hippocampus_049, shape torch.Size([1, 36, 51, 35]), rank 0
2025-06-04 11:29:02.018603: predicting hippocampus_052
2025-06-04 11:29:02.021654: hippocampus_052, shape torch.Size([1, 40, 52, 34]), rank 0
2025-06-04 11:29:02.037899: predicting hippocampus_065
2025-06-04 11:29:02.041141: hippocampus_065, shape torch.Size([1, 37, 52, 39]), rank 0
2025-06-04 11:29:02.057227: predicting hippocampus_083
2025-06-04 11:29:02.061349: hippocampus_083, shape torch.Size([1, 37, 52, 33]), rank 0
2025-06-04 11:29:02.077442: predicting hippocampus_088
2025-06-04 11:29:02.080096: hippocampus_088, shape torch.Size([1, 35, 52, 40]), rank 0
2025-06-04 11:29:02.095476: predicting hippocampus_090
2025-06-04 11:29:02.099380: hippocampus_090, shape torch.Size([1, 40, 50, 37]), rank 0
2025-06-04 11:29:02.115000: predicting hippocampus_092
2025-06-04 11:29:02.117947: hippocampus_092, shape torch.Size([1, 28, 49, 38]), rank 0
2025-06-04 11:29:02.134171: predicting hippocampus_095
2025-06-04 11:29:02.137903: hippocampus_095, shape torch.Size([1, 39, 49, 34]), rank 0
2025-06-04 11:29:02.153291: predicting hippocampus_107
2025-06-04 11:29:02.157214: hippocampus_107, shape torch.Size([1, 34, 55, 35]), rank 0
2025-06-04 11:29:02.173272: predicting hippocampus_108
2025-06-04 11:29:02.176352: hippocampus_108, shape torch.Size([1, 37, 53, 36]), rank 0
2025-06-04 11:29:02.192235: predicting hippocampus_123
2025-06-04 11:29:02.196028: hippocampus_123, shape torch.Size([1, 38, 53, 32]), rank 0
2025-06-04 11:29:02.211983: predicting hippocampus_125
2025-06-04 11:29:02.214797: hippocampus_125, shape torch.Size([1, 39, 42, 43]), rank 0
2025-06-04 11:29:02.242164: predicting hippocampus_157
2025-06-04 11:29:02.245681: hippocampus_157, shape torch.Size([1, 35, 51, 36]), rank 0
2025-06-04 11:29:02.261784: predicting hippocampus_164
2025-06-04 11:29:02.265265: hippocampus_164, shape torch.Size([1, 47, 48, 41]), rank 0
2025-06-04 11:29:02.313225: predicting hippocampus_169
2025-06-04 11:29:02.316496: hippocampus_169, shape torch.Size([1, 39, 45, 36]), rank 0
2025-06-04 11:29:02.332286: predicting hippocampus_175
2025-06-04 11:29:02.335330: hippocampus_175, shape torch.Size([1, 35, 47, 33]), rank 0
2025-06-04 11:29:02.350870: predicting hippocampus_185
2025-06-04 11:29:02.354321: hippocampus_185, shape torch.Size([1, 33, 49, 35]), rank 0
2025-06-04 11:29:02.370753: predicting hippocampus_190
2025-06-04 11:29:02.373430: hippocampus_190, shape torch.Size([1, 30, 52, 37]), rank 0
2025-06-04 11:29:02.390723: predicting hippocampus_194
2025-06-04 11:29:02.393836: hippocampus_194, shape torch.Size([1, 30, 50, 35]), rank 0
2025-06-04 11:29:02.408960: predicting hippocampus_204
2025-06-04 11:29:02.411861: hippocampus_204, shape torch.Size([1, 39, 48, 36]), rank 0
2025-06-04 11:29:02.427484: predicting hippocampus_205
2025-06-04 11:29:02.430525: hippocampus_205, shape torch.Size([1, 32, 47, 32]), rank 0
2025-06-04 11:29:02.445868: predicting hippocampus_210
2025-06-04 11:29:02.449094: hippocampus_210, shape torch.Size([1, 40, 48, 34]), rank 0
2025-06-04 11:29:02.464308: predicting hippocampus_217
2025-06-04 11:29:02.467337: hippocampus_217, shape torch.Size([1, 27, 53, 38]), rank 0
2025-06-04 11:29:02.483474: predicting hippocampus_219
2025-06-04 11:29:02.486885: hippocampus_219, shape torch.Size([1, 39, 45, 37]), rank 0
2025-06-04 11:29:02.502116: predicting hippocampus_229
2025-06-04 11:29:02.505291: hippocampus_229, shape torch.Size([1, 35, 50, 33]), rank 0
2025-06-04 11:29:02.520869: predicting hippocampus_244
2025-06-04 11:29:02.523429: hippocampus_244, shape torch.Size([1, 30, 53, 38]), rank 0
2025-06-04 11:29:02.539072: predicting hippocampus_261
2025-06-04 11:29:02.542534: hippocampus_261, shape torch.Size([1, 33, 58, 36]), rank 0
2025-06-04 11:29:02.568784: predicting hippocampus_264
2025-06-04 11:29:02.572533: hippocampus_264, shape torch.Size([1, 37, 51, 38]), rank 0
2025-06-04 11:29:02.588181: predicting hippocampus_277
2025-06-04 11:29:02.591489: hippocampus_277, shape torch.Size([1, 29, 59, 33]), rank 0
2025-06-04 11:29:02.618575: predicting hippocampus_280
2025-06-04 11:29:02.621227: hippocampus_280, shape torch.Size([1, 32, 47, 37]), rank 0
2025-06-04 11:29:02.637369: predicting hippocampus_286
2025-06-04 11:29:02.640918: hippocampus_286, shape torch.Size([1, 46, 45, 37]), rank 0
2025-06-04 11:29:02.666840: predicting hippocampus_288
2025-06-04 11:29:02.669780: hippocampus_288, shape torch.Size([1, 42, 50, 38]), rank 0
2025-06-04 11:29:02.696931: predicting hippocampus_289
2025-06-04 11:29:02.700478: hippocampus_289, shape torch.Size([1, 36, 49, 35]), rank 0
2025-06-04 11:29:02.716425: predicting hippocampus_296
2025-06-04 11:29:02.720052: hippocampus_296, shape torch.Size([1, 35, 54, 35]), rank 0
2025-06-04 11:29:02.735699: predicting hippocampus_305
2025-06-04 11:29:02.738848: hippocampus_305, shape torch.Size([1, 30, 49, 34]), rank 0
2025-06-04 11:29:02.754385: predicting hippocampus_308
2025-06-04 11:29:02.758076: hippocampus_308, shape torch.Size([1, 40, 48, 38]), rank 0
2025-06-04 11:29:02.773344: predicting hippocampus_317
2025-06-04 11:29:02.776534: hippocampus_317, shape torch.Size([1, 34, 51, 33]), rank 0
2025-06-04 11:29:02.792629: predicting hippocampus_327
2025-06-04 11:29:02.795824: hippocampus_327, shape torch.Size([1, 27, 54, 36]), rank 0
2025-06-04 11:29:02.810897: predicting hippocampus_330
2025-06-04 11:29:02.814758: hippocampus_330, shape torch.Size([1, 33, 55, 35]), rank 0
2025-06-04 11:29:02.831229: predicting hippocampus_332
2025-06-04 11:29:02.834508: hippocampus_332, shape torch.Size([1, 33, 52, 35]), rank 0
2025-06-04 11:29:02.850320: predicting hippocampus_338
2025-06-04 11:29:02.853515: hippocampus_338, shape torch.Size([1, 43, 43, 37]), rank 0
2025-06-04 11:29:02.880418: predicting hippocampus_349
2025-06-04 11:29:02.884376: hippocampus_349, shape torch.Size([1, 34, 50, 34]), rank 0
2025-06-04 11:29:02.899876: predicting hippocampus_350
2025-06-04 11:29:02.903707: hippocampus_350, shape torch.Size([1, 34, 49, 35]), rank 0
2025-06-04 11:29:02.919631: predicting hippocampus_356
2025-06-04 11:29:02.923126: hippocampus_356, shape torch.Size([1, 37, 51, 36]), rank 0
2025-06-04 11:29:02.938176: predicting hippocampus_358
2025-06-04 11:29:02.941063: hippocampus_358, shape torch.Size([1, 34, 50, 35]), rank 0
2025-06-04 11:29:02.956669: predicting hippocampus_374
2025-06-04 11:29:02.959507: hippocampus_374, shape torch.Size([1, 39, 48, 38]), rank 0
2025-06-04 11:29:02.975405: predicting hippocampus_394
2025-06-04 11:29:02.978926: hippocampus_394, shape torch.Size([1, 32, 52, 36]), rank 0
2025-06-04 11:29:06.546597: Validation complete
2025-06-04 11:29:06.548338: Mean Validation Dice:  0.8725265530405955

vmiller987 avatar Jun 04 '25 15:06 vmiller987

I'm helping an RA who's using nnUNet. We ran into the same dead-worker issue, and after setting n_proc_DA=0 we finally got a useful error message: a connection failure that occurs when using multiple CPUs. I'm fairly sure that points to the message passing between worker processes.
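(If it helps anyone reproducing this: assuming the worker count was set through the nnUNet_n_proc_DA environment variable that nnUNet reads, the single-process run that surfaced the real error looked roughly like

nnUNet_n_proc_DA=0 nnUNetv2_train <dataset> <configuration> <fold>

With zero augmentation workers the data loader runs in the main process, so the underlying exception is raised directly instead of being swallowed by a dying child process.)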

We use CentOS 7, which is long in the tooth. To get access to a newer gcc/g++ I enabled the Software Collections devtoolset-7.

When we tried to set n_proc_DA greater than 0, we'd get a connection failure out of torch.multiprocessing's connection code (sorry, this is from memory). I was hoping it wasn't a kernel or permissions issue, as I'm not too savvy with file sockets.

ChatGPT suggested that I change the multiprocessing start method with set_start_method('spawn'), which I found commented out in the source, here

That seems to work for us. I'll need to check back later to be sure, but in the past training would die within a couple of minutes.
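For reference, a minimal sketch of what that change amounts to (this is just the standard Python multiprocessing API; where exactly the commented-out call lives inside nnUNet may differ):

import multiprocessing

# 'fork' (the Linux default) copies the parent process, including CUDA
# and socket state, which can leave forked workers broken or hung.
# 'spawn' starts each worker in a fresh interpreter instead.
# force=True overrides a start method that was already set elsewhere.
if __name__ == "__main__":
    multiprocessing.set_start_method("spawn", force=True)

The __main__ guard matters under 'spawn', because child processes re-import the main module and would otherwise re-run it.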

toomanycats avatar Jul 11 '25 20:07 toomanycats