
Training on 2D images

Open noseDewdrop opened this issue 2 years ago • 3 comments

Thanks for your great work. I'm trying to apply it to 2D images created from the 3D data of BraTS 2019. All of them have the shape (160, 160), so I put them in the same folder. The dataset loads fine, but I get this error when I run main.py:

ValueError: input image size (img_size) should be divisible by stage-wise image resolution.

Here's the command-line output:

python3.8 main.py --feature_size=48 --batch_size=1 --logdir=unetr_test_dir --fold=1 --optim_lr=1e-4 --lrschedule=warmup_cosine --infer_overlap=0.5 --save_checkpoint --val_every=10 --json_list='./jsons/brats21_folds.json' --data_dir=/mnt/sdb_newdisk/ljc_cnu/circle_cut/Tasks/MICCAI/ --use_checkpoint --noamp

0 gpu 0 Batch size is: 1 epochs 300
Traceback (most recent call last):
  File "main.py", line 230, in <module>
    main()
  File "main.py", line 104, in main
    main_worker(gpu=0, args=args)
  File "main.py", line 130, in main_worker
    model = SwinUNETR(
  File "/home/ljc_cnu/anaconda3/envs/swin/lib/python3.8/site-packages/monai/networks/nets/swin_unetr.py", line 111, in __init__
    raise ValueError("input image size (img_size) should be divisible by stage-wise image resolution.")
ValueError: input image size (img_size) should be divisible by stage-wise image resolution.
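For context, the check that raises this error lives in MONAI's swin_unetr.py. A rough sketch of what it effectively requires (an assumption based on the MONAI version in the traceback, with the default patch size of 2 and five resolution stages): each spatial dimension of img_size must be divisible by 32, which would explain why 48 fails while 96 and 160 pass. This is not the actual MONAI code, just an illustration of the constraint:

```python
# Sketch of SwinUNETR's stage-wise size check (not the actual MONAI code):
# with patch size 2 and 5 stages, every img_size dimension must be divisible
# by 2 * 2**4 = 32.
def passes_stage_wise_check(img_size, patch_size=2, num_stages=5):
    return all(dim % (patch_size * 2 ** (num_stages - 1)) == 0 for dim in img_size)

print(passes_stage_wise_check((48, 48, 48)))   # False -> the ValueError above
print(passes_stage_wise_check((96, 96, 96)))   # True
print(passes_stage_wise_check((160, 160)))     # True (2D shape)
```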

Can you suggest a solution?

noseDewdrop · Oct 06 '22 11:10

That's the result after I changed the roi from 96 to 48, which may have been wrong. When I keep it at 96, I get a different error instead:

Traceback (most recent call last):
  File "/home/ljc_cnu/anaconda3/envs/swin/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/ljc_cnu/anaconda3/envs/swin/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ljc_cnu/anaconda3/envs/swin/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ljc_cnu/anaconda3/envs/swin/lib/python3.8/site-packages/monai/data/dataset.py", line 105, in __getitem__
    return self._transform(index)
  File "/home/ljc_cnu/anaconda3/envs/swin/lib/python3.8/site-packages/monai/data/dataset.py", line 91, in _transform
    return apply_transform(self.transform, data_i) if self.transform is not None else data_i
  File "/home/ljc_cnu/anaconda3/envs/swin/lib/python3.8/site-packages/monai/transforms/transform.py", line 118, in apply_transform
    raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.compose.Compose object at 0x7fc9c22c4940>
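This RuntimeError from Compose only wraps the real exception, and the pasted traceback cuts off before the underlying cause. A minimal debugging sketch to find the transform that actually fails on the 2D (160, 160) arrays; `train_transform` and `sample` are hypothetical names standing in for the Compose built in this repo's data_utils.py and one raw dictionary item from the dataset:

```python
# Apply the pipeline one transform at a time to surface the failing step
# (often a 3D crop such as a (96, 96, 96) roi applied to 2D slices).
data = sample
for t in train_transform.transforms:
    try:
        data = t(data)  # MONAI dictionary transforms are plain callables
    except Exception as exc:
        print(f"Failed at {type(t).__name__}: {exc}")
        raise
```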

noseDewdrop · Oct 06 '22 11:10

I don't know if you still have this issue, but I would try cropping or padding the images to a multiple of 96, for example 192, which is the closest to 160.
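A minimal sketch of that suggestion using MONAI transforms; the dictionary keys are assumptions (the BraTS JSON in this repo typically uses "image"/"label"), and this only covers the resizing step, not the full training pipeline:

```python
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, ResizeWithPadOrCropd
)

# Pad the 160x160 slices symmetrically to 192x192 (would crop if larger).
pad_to_192 = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ResizeWithPadOrCropd(keys=["image", "label"], spatial_size=(192, 192)),
])
```

Note that the repo's default pipeline is 3D; for 2D slices the crop roi would also need to be two-dimensional and SwinUNETR built with spatial_dims=2.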

marvnmtz · Dec 15 '22 10:12

@noseDewdrop Hello, do you remember how you solved this problem?

Saillxl · Jun 19 '24 11:06