CRAFT-Reimplementation
RuntimeError: each element in list of batch should be of equal size
I get this error when I run python trainSynth.py with SynthText as the dataset:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 84, in default_collate
return [default_collate(samples) for samples in transposed]
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 84, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 82, in default_collate
raise RuntimeError('each element in list of batch should be of equal size')
RuntimeError: each element in list of batch should be of equal size
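The error can be reproduced outside the training script: PyTorch's default_collate refuses to batch samples whose per-sample lists have different lengths. A minimal standalone sketch (the dict keys and shapes here are illustrative, not the repo's actual ones):

```python
import torch
from torch.utils.data._utils.collate import default_collate

# Two synthetic samples whose "confidences" lists have different lengths,
# mimicking images that contain different numbers of words.
batch = [
    {"image": torch.zeros(3, 4, 4), "confidences": [0.9, 0.8]},
    {"image": torch.zeros(3, 4, 4), "confidences": [0.7]},
]

try:
    default_collate(batch)
except RuntimeError as err:
    print(err)  # each element in list of batch should be of equal size
```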
Can you share more info?
Sure, I prepared a notebook you can use to reproduce the error. Note that I'm using a subset of the SynthText dataset due to disk-storage limits on Google Colab, so several steps run before your code to download the subset and filter out only the corresponding data, while keeping the shapes expected by a slightly modified version of your code. I also tried running your code with the full dataset, without any modifications on my part; it ran for over an hour without displaying any progress, so I'm assuming something is failing silently somewhere.
any updates on this?
@unsignedrant Have you solved this problem? I meet the same issue.
@ziyeZzz No, I haven't and I reconsidered using craft for text detection because of the absence of implementation details in the paper as well as here on github, all of the existing implementations similar to this one are broken.
@unsignedrant I just found that setting bs=1 lets the batch through, but I haven't figured out how to make bs>1 work.
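The bs=1 behaviour can be checked in isolation: with a batch of one there is nothing to compare lengths against, so the equal-size check in default_collate never fires. A small sketch (keys and shapes are placeholders):

```python
import torch
from torch.utils.data._utils.collate import default_collate

# A single-sample batch: the variable-length "confidences" list has no
# sibling to mismatch with, so collation succeeds.
single = [{"image": torch.zeros(3, 4, 4), "confidences": [0.9, 0.8]}]
out = default_collate(single)
print(out["image"].shape)  # torch.Size([1, 3, 4, 4])
```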
When I use PyTorch 1.8.1, Python 3.7.9, and batch_size>1, the code reports "each element in list of batch should be of equal size".
With PyTorch 1.8.1, Python 3.7.9, and batch_size=1, the training loss becomes NaN at around 10,000 training steps.
When I use PyTorch 1.5.1 and Python 3.5, the code works fine regardless of the batch size.
Do you have any idea about this issue? @backtime92
Delete the variable "confidences": its length varies from image to image, so default_collate cannot batch it when batch_size>1.
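If the confidence values are still needed later (e.g. for weakly supervised fine-tuning), an alternative to deleting them is a custom collate_fn that stacks only the fixed-size tensors and passes the variable-length lists through untouched. A sketch with a toy dataset; the names and shapes are placeholders, not the repo's actual ones:

```python
import torch
from torch.utils.data import DataLoader, Dataset
from torch.utils.data._utils.collate import default_collate


class ToyDataset(Dataset):
    # Hypothetical stand-in for the SynthText loader: each sample is a
    # fixed-size image plus a confidences list of varying length.
    def __init__(self):
        self.samples = [
            (torch.zeros(3, 8, 8), [0.9, 0.8]),
            (torch.zeros(3, 8, 8), [0.7]),
        ]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]


def collate_keep_confidences(batch):
    # Stack the images with default_collate, but keep the per-image
    # confidence lists as a plain Python list of lists.
    images, confidences = zip(*batch)
    return default_collate(list(images)), list(confidences)


loader = DataLoader(ToyDataset(), batch_size=2,
                    collate_fn=collate_keep_confidences)
images, confidences = next(iter(loader))
print(images.shape)   # torch.Size([2, 3, 8, 8])
print(confidences)    # [[0.9, 0.8], [0.7]]
```

The loss code then has to iterate over the list itself, but batch_size>1 works without touching the dataset class.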