u-net-brain-tumor
While training (python train.py --task=all) I got an error
Traceback (most recent call last):
File "train.py", line 250, in
How can I solve this one? Thanks in advance for helping :)
Three solutions:
- find a machine with more memory
- use a smaller batch size
- reimplement the data loading part with the TensorFlow Dataset API (see the sketch below)
Hope it helps.
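For the last option, here is a minimal sketch of what streaming the data with tf.data could look like, assuming TensorFlow 2.x and 2D slices of shape (240, 240, 4) saved as .npy files; the names `volume_paths`, `volume_generator`, and `make_dataset` are illustrative, not the repo's actual code:

```python
import numpy as np
import tensorflow as tf

def volume_generator(volume_paths):
    """Yield one sample at a time so the whole dataset never sits in RAM."""
    for path in volume_paths:
        yield np.load(path).astype(np.float32)

def make_dataset(volume_paths, batch_size=4):
    ds = tf.data.Dataset.from_generator(
        lambda: volume_generator(volume_paths),
        output_types=tf.float32,
        output_shapes=(240, 240, 4))  # assumed: 240x240 BraTS slices, 4 modalities
    # Small batches plus a prefetch buffer keep peak memory near batch_size items.
    return ds.shuffle(buffer_size=64).batch(batch_size).prefetch(1)
```

Because the generator yields one item at a time, only about `batch_size` samples plus the prefetch buffer are ever resident, so lowering `batch_size` directly caps peak memory.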
Hey :)
- I have a machine with 16 GB of RAM; is that enough?
- What do you mean by batch size exactly?
- Where is the data loading part exactly? Do you mean that I should replace this code?

```python
if DATA_SIZE == 'all':
    HGG_path_list = tl.files.load_folder_list(path=HGG_data_path)
    LGG_path_list = tl.files.load_folder_list(path=LGG_data_path)
elif DATA_SIZE == 'half':
    HGG_path_list = tl.files.load_folder_list(path=HGG_data_path)[0:100]  # DEBUG WITH SMALL DATA
    LGG_path_list = tl.files.load_folder_list(path=LGG_data_path)[0:30]   # DEBUG WITH SMALL DATA
elif DATA_SIZE == 'small':
    HGG_path_list = tl.files.load_folder_list(path=HGG_data_path)[0:50]   # DEBUG WITH SMALL DATA
    LGG_path_list = tl.files.load_folder_list(path=LGG_data_path)[0:20]   # DEBUG WITH SMALL DATA
```
Actually, I am trying to run your solution with the BraTS 2018 dataset.
I also got this same issue. I found it is not because of the batch size; the error is in nib.load(image_path).get_data(). So, could you tell us which TensorFlow load API could replace this nib load?
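As far as I know there is no built-in TensorFlow op that parses NIfTI files, so you cannot replace nib.load itself; what the Dataset API buys you is lazy, per-element loading instead of reading every file up front. A sketch of wrapping nibabel in tf.py_function, assuming TensorFlow 2.4+ (the names `image_paths` and `nifti_dataset` are illustrative):

```python
import numpy as np
import tensorflow as tf
import nibabel as nib

def _read_nifti(path_tensor):
    # Runs eagerly inside tf.py_function; the path arrives as a byte string.
    path = path_tensor.numpy().decode("utf-8")
    vol = nib.load(path).get_fdata()  # get_fdata() supersedes the deprecated get_data()
    return vol.astype(np.float32)

def nifti_dataset(image_paths, batch_size=2):
    ds = tf.data.Dataset.from_tensor_slices(image_paths)
    # nib.load is now called once per element, on demand, so only the
    # current batch is held in memory at any time.
    ds = ds.map(lambda p: tf.py_function(_read_nifti, [p], tf.float32),
                num_parallel_calls=tf.data.AUTOTUNE)
    return ds.batch(batch_size).prefetch(1)
```

Note that recent nibabel versions deprecate .get_data() in favor of .get_fdata(), which may also be relevant to the error you are seeing.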
How did you resolve this problem?