H-DenseUNet

Keras problem when running bash_train.sh

Open · GilbertKun opened this issue 5 years ago · 18 comments

Hello, I downloaded H-DenseUNet from your GitHub, but an error occurred when I ran `sh bash_train.sh`:

```
Traceback (most recent call last):
  File "train_2ddense.py", line 215, in <module>
    train_and_predict()
  File "train_2ddense.py", line 210, in train_and_predict
    workers=3, use_multiprocessing=False)
  File "Keras-2.0.8/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "Keras-2.0.8/keras/engine/training.py", line 2011, in fit_generator
    generator_output = next(output_generator)
StopIteration
```

Could you give me some help? Thanks!

GilbertKun avatar May 07 '19 02:05 GilbertKun

That means the generator is not yielding any data. Please check whether the data root is correct and whether you have loaded the data.

xmengli avatar May 07 '19 02:05 xmengli
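For anyone debugging this: a minimal sketch (names and arguments are placeholders for whatever `train_2ddense.py` actually uses) that pulls a few batches from the generator by hand, so an empty generator fails with a clear message instead of a bare `StopIteration` inside `fit_generator`:

```python
def check_generator(gen, n_batches=3):
    # Pull a few batches manually; StopIteration here means the generator
    # is not producing data (wrong data root, empty file list, ...).
    for i in range(n_batches):
        try:
            x, y = next(gen)
        except StopIteration:
            raise RuntimeError("Generator stopped after %d batch(es); "
                               "check the data root and file lists." % i)
        print("batch %d: x %s, y %s" % (i, x.shape, y.shape))

# Hypothetical usage; adapt to the real generator construction:
# gen = generate_arrays_from_file(trainidx, img_list, tumor_list, ...)
# check_generator(gen)
```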


Fitting model......

Then the exception occurred:

```
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "Keras-2.0.8/keras/utils/data_utils.py", line 568, in data_generator_task
    generator_output = next(self._generator)
ValueError: generator already executing
```

```
/home/awifi/anaconda3/envs/denseu/lib/python2.7/site-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
  warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "Keras-2.0.8/keras/utils/data_utils.py", line 568, in data_generator_task
    generator_output = next(self._generator)
  File "train_2ddense.py", line 120, in generate_arrays_from_file
    result_list = pool.map(load_seq_crop_data_masktumor_try, Parameter_List)
  File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/multiprocessing/pool.py", line 251, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/multiprocessing/pool.py", line 567, in get
    raise self._value
ValueError: low >= high
```

I changed 131 to 5 and changed the workers from 3 to 1, and then it worked.

GilbertKun avatar May 07 '19 03:05 GilbertKun
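For context on why that change helps: `ValueError: low >= high` is exactly what `numpy.random.randint(low, high)` raises when the sampling range is empty, so one plausible (unconfirmed) cause is that the random slice sampling in `load_seq_crop_data_masktumor_try` ends up with an empty range for some of the 131 volumes; using only 5 volumes may simply avoid the problematic ones. A minimal sketch of the failure mode, with made-up numbers:

```python
import numpy as np

num_slices = 2     # hypothetical: a scan (or tumor ROI) with very few slices
input_depth = 3    # hypothetical: depth of the sampled slice stack

low = 0
high = num_slices - input_depth + 1   # = 0, so the interval [0, 0) is empty

try:
    start = np.random.randint(low, high)
except ValueError as e:
    print(e)   # -> "low >= high"
```

The other error, `ValueError: generator already executing`, happens when several Keras worker threads call `next()` on the same plain Python generator at once, which is why dropping to one worker makes it go away. A common workaround (not part of this repo) is a thread-safe wrapper:

```python
import threading

class ThreadSafeIter(object):
    """Wrap a generator so multiple fit_generator workers can call next() safely."""
    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def next(self):           # Python 2 name
        with self.lock:
            return next(self.it)

    __next__ = next           # Python 3 name for the same method

# Hypothetical usage:
# model.fit_generator(ThreadSafeIter(generate_arrays_from_file(...)), ...)
```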

Maybe it is a limitation of the CPU?

xmengli avatar May 07 '19 03:05 xmengli

So what is the recommended CPU configuration?

GilbertKun avatar May 07 '19 03:05 GilbertKun

Having the same problem. Isn't the data root the Liver Tumor Segmentation Challenge dataset, i.e. the one you download and not the one you preprocessed? Any update on the CPU configuration?

mnc1423 avatar Jul 05 '19 02:07 mnc1423

The data is the LiTS challenge dataset. You can download it from the challenge website.

xmengli avatar Jul 05 '19 02:07 xmengli

I'm using an Intel(R) Xeon(R) Silver 4116 CPU; should this cause a problem?

mnc1423 avatar Jul 05 '19 02:07 mnc1423

I load the whole dataset once before training the network. If you can load the whole dataset, I think the hardware is OK.

xmengli avatar Jul 05 '19 02:07 xmengli
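For reference, "load the whole dataset once" can look roughly like the sketch below using nibabel (which is in the requirements); the `volume-i.nii` / `segmentation-i.nii` naming is assumed from the LiTS download, and the actual loading code lives in `train_2ddense.py`:

```python
import os
import nibabel as nib

def preload_volumes(data_root, n_volumes=131):
    """Read every LiTS volume and segmentation into memory up front.
    File naming is an assumption; adjust to how your data is stored."""
    images, masks = [], []
    for i in range(n_volumes):
        img = nib.load(os.path.join(data_root, 'volume-%d.nii' % i))
        seg = nib.load(os.path.join(data_root, 'segmentation-%d.nii' % i))
        images.append(img.get_data())   # nibabel 2.x API
        masks.append(seg.get_data())
    return images, masks
```

Keeping all 131 volumes resident is what makes RAM, rather than the CPU model, the limiting factor, which also explains why loading only 5 volumes works on smaller machines.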

Thanks, I'll try that :)

mnc1423 avatar Jul 05 '19 02:07 mnc1423

Can I ask what TensorFlow, CUDA, and cuDNN versions you're using for this build?

mnc1423 avatar Jul 05 '19 03:07 mnc1423

I attach the requirements file:

```
absl-py==0.1.10
backports-abc==0.5
backports.functools-lru-cache==1.5
backports.weakref==1.0rc1
bleach==1.5.0
bokeh==0.12.15
certifi==2018.1.18
cffi==1.11.5
chardet==3.0.4
click==6.7
cloudpickle==0.5.2
cycler==0.10.0
cytoolz==0.9.0.1
dask==0.17.2
decorator==4.3.0
distributed==1.21.6
dominate==2.3.1
easydict==1.4
enum34==1.1.6
funcsigs==1.0.2
functools32==3.2.3.post2
futures==3.2.0
h5py==2.8.0
heapdict==1.0.0
html5lib==0.9999999
idna==2.6
imageio==2.3.0
Jinja2==2.10
kiwisolver==1.0.1
linecache2==1.0.0
locket==0.2.0
Markdown==2.6.11
MarkupSafe==1.0
matplotlib==2.2.2
MedPy==0.3.0
mkl-fft==1.0.0
mkl-random==1.0.1
mock==2.0.0
msgpack-python==0.5.6
networkx==2.1
nibabel==2.3.1
nltk==2.0.4
numpy==1.14.3
olefile==0.45.1
packaging==17.1
pairwise==0.1
pandas==0.22.0
partd==0.3.8
pathlib==1.0.1
pbr==4.0.2
Pillow==5.0.0
protobuf==3.5.2
psutil==5.4.5
pycparser==2.18
pydicom==1.2.1
pyparsing==2.2.0
python-dateutil==2.7.2
pytz==2018.4
PyWavelets==0.5.2
PyYAML==3.12
pyzmq==17.0.0
requests==2.18.4
scikit-image==0.13.1
scikit-learn==0.19.1
scipy==1.1.0
singledispatch==3.4.0.3
six==1.11.0
sortedcontainers==1.5.10
subprocess32==3.2.7
tblib==1.3.2
tensorflow==1.5.1
tensorflow-gpu==1.2.1
tensorflow-tensorboard==1.5.1
toolz==0.9.0
torch==0.3.0.post4
torchfile==0.1.0
torchvision==0.2.0
tornado==5.0.2
tqdm==4.28.1
traceback2==1.4.0
unittest2==1.1.0
urllib3==1.22
visdom==0.1.8.3
webencodings==0.5
websocket-client==0.48.0
Werkzeug==0.14.1
zict==0.1.3
```

xmengli avatar Jul 05 '19 03:07 xmengli

Hi, I have a question: if I want to train on a different CT dataset, do I need to change something? I'm currently trying a different CT dataset with images and segmentations, but it keeps giving me an error:

```
Traceback (most recent call last):
  File "Ktrain_2ddense.py", line 259, in <module>
    train_and_predict()
  File "Ktrain_2ddense.py", line 228, in train_and_predict
    workers=3, use_multiprocessing=True)
  File "Keras-2.0.8/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "Keras-2.0.8/keras/engine/training.py", line 2011, in fit_generator
    generator_output = next(output_generator)
StopIteration
```

mnc1423 avatar Jul 17 '19 01:07 mnc1423
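That `StopIteration` means the generator function returned without ever yielding a batch, which on a new dataset usually points at an empty or mismatched file list rather than the network itself. A hedged sketch of a pre-flight check (paths and naming patterns are assumptions; adapt them to your own layout):

```python
import os

def check_dataset(data_root, n_volumes,
                  img_pattern='volume-%d.nii',
                  seg_pattern='segmentation-%d.nii'):
    """Fail early if an expected image/segmentation pair is missing,
    instead of letting the generator silently yield nothing."""
    missing = []
    for i in range(n_volumes):
        for pattern in (img_pattern, seg_pattern):
            path = os.path.join(data_root, pattern % i)
            if not os.path.exists(path):
                missing.append(path)
    if missing:
        raise IOError('%d files missing, e.g. %s' % (len(missing), missing[0]))

# Hypothetical usage:
# check_dataset('/path/to/myTrainingData', n_volumes=131)
```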

> Hi, I have a question: if I want to train on a different CT dataset, do I need to change something? I'm currently trying a different CT dataset with images and segmentations, but it keeps giving me an error:
>
> ```
> Traceback (most recent call last):
>   File "Ktrain_2ddense.py", line 259, in <module>
>     train_and_predict()
>   File "Ktrain_2ddense.py", line 228, in train_and_predict
>     workers=3, use_multiprocessing=True)
>   File "Keras-2.0.8/keras/legacy/interfaces.py", line 87, in wrapper
>     return func(*args, **kwargs)
>   File "Keras-2.0.8/keras/engine/training.py", line 2011, in fit_generator
>     generator_output = next(output_generator)
> StopIteration
> ```

Hello! I have the same problem. Could you please tell me how you solved it?

guolihong avatar Sep 21 '19 13:09 guolihong

> Fitting model......
>
> Then the exception occurred:
>
> ```
> Exception in thread Thread-2:
> Traceback (most recent call last):
>   File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/threading.py", line 801, in __bootstrap_inner
>     self.run()
>   File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/threading.py", line 754, in run
>     self.__target(*self.__args, **self.__kwargs)
>   File "Keras-2.0.8/keras/utils/data_utils.py", line 568, in data_generator_task
>     generator_output = next(self._generator)
> ValueError: generator already executing
> ```
>
> ```
> /home/awifi/anaconda3/envs/denseu/lib/python2.7/site-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
>   warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
> Exception in thread Thread-1:
> Traceback (most recent call last):
>   File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/threading.py", line 801, in __bootstrap_inner
>     self.run()
>   File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/threading.py", line 754, in run
>     self.__target(*self.__args, **self.__kwargs)
>   File "Keras-2.0.8/keras/utils/data_utils.py", line 568, in data_generator_task
>     generator_output = next(self._generator)
>   File "train_2ddense.py", line 120, in generate_arrays_from_file
>     result_list = pool.map(load_seq_crop_data_masktumor_try, Parameter_List)
>   File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/multiprocessing/pool.py", line 251, in map
>     return self.map_async(func, iterable, chunksize).get()
>   File "/home/awifi/anaconda3/envs/denseu/lib/python2.7/multiprocessing/pool.py", line 567, in get
>     raise self._value
> ValueError: low >= high
> ```
>
> I changed 131 to 5 and changed the workers from 3 to 1, and then it worked.

I have the same problem as you. I changed 131 to 5 and changed the workers from 3 to 1, and then it worked. Do you know why that is?

lyxlynn avatar Dec 05 '19 07:12 lyxlynn

> Hi, I have a question: if I want to train on a different CT dataset, do I need to change something? I'm currently trying a different CT dataset with images and segmentations, but it keeps giving me an error:
>
> ```
> Traceback (most recent call last):
>   File "Ktrain_2ddense.py", line 259, in <module>
>     train_and_predict()
>   File "Ktrain_2ddense.py", line 228, in train_and_predict
>     workers=3, use_multiprocessing=True)
>   File "Keras-2.0.8/keras/legacy/interfaces.py", line 87, in wrapper
>     return func(*args, **kwargs)
>   File "Keras-2.0.8/keras/engine/training.py", line 2011, in fit_generator
>     generator_output = next(output_generator)
> StopIteration
> ```

I have the same question when training on my own data. I wonder how you solved it. Thanks!

larry-11 avatar Apr 11 '20 10:04 larry-11

> Hi, I have a question: if I want to train on a different CT dataset, do I need to change something? I'm currently trying a different CT dataset with images and segmentations, but it keeps giving me an error:
>
> ```
> Traceback (most recent call last):
>   File "Ktrain_2ddense.py", line 259, in <module>
>     train_and_predict()
>   File "Ktrain_2ddense.py", line 228, in train_and_predict
>     workers=3, use_multiprocessing=True)
>   File "Keras-2.0.8/keras/legacy/interfaces.py", line 87, in wrapper
>     return func(*args, **kwargs)
>   File "Keras-2.0.8/keras/engine/training.py", line 2011, in fit_generator
>     generator_output = next(output_generator)
> StopIteration
> ```
>
> Hello! I have the same problem. Could you please tell me how you solved it?

Same problem. Have you solved it yet?

larry-11 avatar Apr 11 '20 10:04 larry-11

I have the code in my repo: https://github.com/mnc1423/H-DenseUNet/blob/master/train_2ddense.py. I can't remember exactly how I did it, but I hope this helps.

mnc1423 avatar Apr 14 '20 15:04 mnc1423

> I load the whole dataset once before training the network. If you can load the whole dataset, I think the hardware is OK.

Hi, because my memory is not enough, what should I do to load the data batch by batch instead of the whole dataset? Thank you.

ee-hash avatar Sep 14 '21 09:09 ee-hash
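One possible approach (not from the original code) is to keep only file paths in memory and read volumes on demand inside the generator, e.g. with nibabel. A rough sketch with assumed file naming and deliberately simplified slice sampling compared to the cropping in `train_2ddense.py`:

```python
import os
import numpy as np
import nibabel as nib

def lazy_batch_generator(data_root, volume_ids, batch_size=4):
    """Yield (image_batch, mask_batch) by reading one volume per step
    instead of preloading all of them into memory."""
    while True:
        vid = int(np.random.choice(volume_ids))
        img = nib.load(os.path.join(data_root, 'volume-%d.nii' % vid)).get_data()
        seg = nib.load(os.path.join(data_root, 'segmentation-%d.nii' % vid)).get_data()
        depth = img.shape[2]
        # Sample random slices; shape becomes (batch, H, W, 1).
        idx = np.random.randint(0, depth, size=batch_size)
        x = img[:, :, idx].transpose(2, 0, 1)[..., np.newaxis].astype('float32')
        y = seg[:, :, idx].transpose(2, 0, 1)[..., np.newaxis].astype('float32')
        yield x, y

# Hypothetical usage:
# model.fit_generator(lazy_batch_generator('/path/to/data', list(range(131))),
#                     steps_per_epoch=..., epochs=...)
```

Reading from disk for every batch is slower than the all-in-memory approach, but the footprint stays at roughly one volume at a time.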