About multi-threading and single-threading
Hello, I rewrote nnUNet into a multi-task learning framework. In short, in addition to the segmentation task, there is a second task that resembles learning the connectivity between image pixels. I rewrote the dataset, dataloader, and trainer, mainly adding a read for one additional label. That is the background.
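For reference, the extra label read is roughly like this (a minimal sketch; the class and file names here are placeholders for illustration, not nnUNet's actual API):

import numpy as np

class MultiTaskCaseLoaderSketch:
    # Hypothetical sketch of the modification described above: alongside the
    # image and segmentation, each case also loads a connectivity label.
    def load_case(self, case_folder: str, case_id: str):
        data = np.load(f"{case_folder}/{case_id}_data.npy")  # image
        seg = np.load(f"{case_folder}/{case_id}_seg.npy")    # segmentation label
        conn = np.load(f"{case_folder}/{case_id}_conn.npy")  # extra connectivity label (assumed name)
        return data, seg, conn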
The problem is that the multi-threaded dataloader seems to cause a memory leak, i.e., when using this code:
mt_gen_train = LimitedLenWrapper(self.num_iterations_per_epoch, data_loader=dl_tr,
                                 transform=tr_transforms, num_processes=allowed_num_processes,
                                 num_cached=6, seeds=None,
                                 pin_memory=self.device.type == 'cuda', wait_time=0.02)
And the error is as follows:
Traceback (most recent call last):
File "/home/i/miniconda3/envs/nnUNet/bin/nnUNetv2_train", line 8, in
For training, nnUNet_compile=False helps, but I also have to set nnUNet_n_proc_DA=0 every single time, or the background workers die during training. I cannot run it without this setting.
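The same workaround can be set programmatically before launching training (a sketch; as far as I can tell, nnUNetv2 reads both of these environment variables at trainer setup):

import os

# Workaround from this thread, set before training starts:
os.environ['nnUNet_compile'] = 'False'  # disables torch.compile in the trainer
os.environ['nnUNet_n_proc_DA'] = '0'    # 0 DA workers -> nnUNet uses the single-threaded augmenter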
Hello, I have also been modifying the network architecture recently, mainly by adding an extra input and output, so it is now dual-input and dual-output. However, I ran into a problem with the data loader. Could we get in touch? Thank you.