If I remove the assert statement, torchstat still does not support torch.nn's 3D layers.
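For context, a minimal reproduction of what I mean (the toy network below is hypothetical; I am assuming torchstat's documented `stat(model, input_size)` entry point): torchstat builds its dummy input from an image-style `(C, H, W)` size, so the 4-element `(C, D, H, W)` size a 3D model needs trips the assert, and even with the assert removed the 3D layers have no registered counters.

```python
import torch.nn as nn
from torchstat import stat

# Hypothetical toy 3D network just to trigger the problem.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.BatchNorm3d(8),
    nn.ReLU(),
)

# torchstat expects a 2D-image-style (C, H, W) input_size; the
# 4-element size below hits the assert, and without the assert
# the Conv3d/BatchNorm3d layers still go uncounted.
stat(model, (1, 32, 64, 64))
```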
Hi there, I am here to update my problem. While debugging, I found that the problem comes from `softmax_mse_loss`; after changing it to `nn.MSELoss()`, the problem of the loss going up...
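For anyone comparing the two, here is roughly how the swap changes the computation. This is a sketch of the common mean-teacher-style `softmax_mse_loss` (an assumption about this repo's version; names and reduction may differ): it softmaxes both sets of logits before the MSE, while `nn.MSELoss()` compares raw logits and mean-reduces, so the loss scales are quite different.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def softmax_mse_loss(input_logits, target_logits):
    """MSE between softmaxed logits (mean-teacher-style sketch)."""
    assert input_logits.size() == target_logits.size()
    input_softmax = F.softmax(input_logits, dim=1)
    target_softmax = F.softmax(target_logits, dim=1)
    # Sum-reduce then normalize; some versions divide by num_classes instead.
    return F.mse_loss(input_softmax, target_softmax, reduction='sum') / input_logits.numel()

# nn.MSELoss() by contrast compares the raw logits directly:
student = torch.randn(4, 2, 8, 8, 8)
teacher = torch.randn(4, 2, 8, 8, 8)
print(softmax_mse_loss(student, teacher), nn.MSELoss()(student, teacher))
```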
Oh, sure, thanks for reminding me of the difference between pseudo and positive keys. I forgot about Eq. (6), so I spent all day thinking about the influence on the unsupervised...
> Did you mean using different alphas for the unsupervised loss (Eq. (6)) and the contrastive loss (Eq. (10))?

Yes, exactly. But it might be better to just let the supervised loss play the main guiding...
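If it helps to make the comparison concrete, this is one common way to weight the terms separately (all names and constants below are illustrative placeholders, not the paper's values); a ramp-up keeps the supervised loss dominant early in training:

```python
import numpy as np

def sigmoid_rampup(current, rampup_length):
    # Exponential ramp-up (Laine & Aila), commonly used for consistency weights.
    if rampup_length == 0:
        return 1.0
    phase = 1.0 - np.clip(current, 0.0, rampup_length) / rampup_length
    return float(np.exp(-5.0 * phase * phase))

def total_loss(sup_loss, unsup_loss, contrastive_loss, epoch):
    # Separate alphas for the unsupervised (Eq. (6)) and contrastive
    # (Eq. (10)) terms; 0.1 / 0.01 / 40 are illustrative values only.
    alpha_u = 0.1 * sigmoid_rampup(epoch, 40)
    alpha_c = 0.01 * sigmoid_rampup(epoch, 40)
    return sup_loss + alpha_u * unsup_loss + alpha_c * contrastive_loss
```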
Hi there, it was a mistake when I said num_threads is in the dataloader; it is actually in the data augmentation method. To speed up the epoch time, I changed num_threads in the data augmentation function; after the change...
It is obviously slow: when running

```
_ = self.tr_gen.next()
_ = self.val_gen.next()
```

it takes a very long time. Is that normal?
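For what it's worth, the first `next()` on a multithreaded augmenter is usually slow because the background workers are still starting up and filling their queues; a quick check (assuming `tr_gen`/`val_gen` are the generators from the snippet above):

```python
import time

for name, gen in (("tr_gen", tr_gen), ("val_gen", val_gen)):
    t0 = time.perf_counter()
    _ = gen.next()  # first call: waits for the workers to spin up
    first = time.perf_counter() - t0
    t0 = time.perf_counter()
    _ = gen.next()  # later calls should return much faster
    print(f"{name}: first {first:.1f}s, second {time.perf_counter() - t0:.1f}s")
```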
Hi there. I changed the augmentation function by editing the params (from `default_3D_augmentation_params`). There are two parameters involved:

1. `params['num_threads'] = 8`
2. `params['num_cached_per_thread'] = 1`

After I changed `num_threads`...
Sure, you are right! Generally these parameters are defined in `default_3D_augmentation_params` and `default_2D_augmentation_params` like this:

```
"num_threads": 12 if 'nnUNet_n_proc_DA' not in os.environ else int(os.environ['nnUNet_n_proc_DA']),
"num_cached_per_thread": 1,
```

but I...
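Given that default, the thread count can also be overridden without editing the source, since the dict reads `nnUNet_n_proc_DA` from the environment; the variable just has to be set before the params dict above is evaluated (setting it in the shell before launching training is the usual route):

```python
import os

# Must happen before the augmentation params are built, i.e. before
# importing the nnU-Net training modules that evaluate the dict above.
os.environ['nnUNet_n_proc_DA'] = '8'
```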
CPU: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz / 16 cores
GPU: NVIDIA GeForce RTX 2080 Ti with 11019 MiB available, 8 GPUs in total
Though the CPU usage is intensive :-(