I trained on my private dataset, which uses the same format as VOC2007, but encountered the error below. Training on VOC2007 itself runs successfully.
Traceback (most recent call last):
File "/datashare3/charis/code/recaps/fasterRcnn/fasterrcnn-pytorch-training-pipeline-main/train.py", line 571, in
main(args)
File "/datashare3/charis/code/recaps/fasterRcnn/fasterrcnn-pytorch-training-pipeline-main/train.py", line 420, in main
scaler=SCALER
File "/datashare3/charis/code/recaps/fasterRcnn/fasterrcnn-pytorch-training-pipeline-main/torch_utils/engine.py", line 45, in train_one_epoch
for images, targets in metric_logger.log_every(data_loader, print_freq, header):
File "/datashare3/charis/code/recaps/fasterRcnn/fasterrcnn-pytorch-training-pipeline-main/torch_utils/utils.py", line 173, in log_every
for obj in iterable:
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 628, in next
data = self._next_data()
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
return self._process_data(data)
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
data.reraise()
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/torch/_utils.py", line 543, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 58, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/datashare3/charis/code/recaps/fasterRcnn/fasterrcnn-pytorch-training-pipeline-main/datasets.py", line 316, in getitem
labels=labels)
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/albumentations/core/composition.py", line 207, in call
p.preprocess(data)
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/albumentations/core/utils.py", line 83, in preprocess
data[data_name] = self.check_and_convert(data[data_name], rows, cols, direction="to")
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/albumentations/core/utils.py", line 91, in check_and_convert
return self.convert_to_albumentations(data, rows, cols)
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/albumentations/core/bbox_utils.py", line 142, in convert_to_albumentations
return convert_bboxes_to_albumentations(data, self.params.format, rows, cols, check_validity=True)
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/albumentations/core/bbox_utils.py", line 408, in convert_bboxes_to_albumentations
return [convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity) for bbox in bboxes]
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/albumentations/core/bbox_utils.py", line 408, in
return [convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity) for bbox in bboxes]
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/albumentations/core/bbox_utils.py", line 352, in convert_bbox_to_albumentations
check_bbox(bbox)
File "/datashare3/charis/anaconda/envs/simp/lib/python3.7/site-packages/albumentations/core/bbox_utils.py", line 435, in check_bbox
raise ValueError(f"Expected {name} for bbox {bbox} to be in the range [0.0, 1.0], got {value}.")
ValueError: Expected y_min for bbox (tensor(0.5800), tensor(1.0111), tensor(0.7067), tensor(1.), tensor(1)) to be in the range [0.0, 1.0], got 1.0110957622528076.
It looks like, for some images, the y_min coordinate is annotated outside the image border: albumentations normalizes box coordinates by the image size, so a normalized value greater than 1.0 means the annotated y value exceeds the image height. You may need to check which images have that issue.
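Something like the following sketch can help locate them (this assumes a standard VOC layout with an `Annotations` folder of XML files; the folder path is only an example, adjust it to your dataset):

```python
# Hypothetical checker (not part of the training pipeline): scan VOC-style
# XML annotations and report every bounding box that does not fit inside
# the image size recorded in the same file.
import glob
import os
import xml.etree.ElementTree as ET

ANNOT_DIR = "Annotations"  # adjust to your dataset's annotation folder

for xml_path in glob.glob(os.path.join(ANNOT_DIR, "*.xml")):
    root = ET.parse(xml_path).getroot()
    width = int(root.find("size/width").text)
    height = int(root.find("size/height").text)
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        xmin = float(box.find("xmin").text)
        ymin = float(box.find("ymin").text)
        xmax = float(box.find("xmax").text)
        ymax = float(box.find("ymax").text)
        if (xmin < 0 or ymin < 0 or xmax > width or ymax > height
                or xmin >= xmax or ymin >= ymax):
            print(f"{xml_path}: box ({xmin}, {ymin}, {xmax}, {ymax}) "
                  f"invalid for image size {width}x{height}")
```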
Hello, I have just pushed an update to datasets.py that removes all files with invalid bounding boxes before training. Please check. I am closing the issue for now. Please re-open if needed.
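For anyone hitting the same error, the idea of the fix is simply to drop samples whose boxes fall outside the image before they reach albumentations. A minimal sketch of that check (not the exact datasets.py code; `has_valid_boxes` and the helpers in the comment are only illustrative names):

```python
def has_valid_boxes(boxes, width, height):
    """Return True if every (xmin, ymin, xmax, ymax) box, given in absolute
    pixel coordinates, lies fully inside a width x height image."""
    for xmin, ymin, xmax, ymax in boxes:
        if xmin < 0 or ymin < 0 or xmax > width or ymax > height:
            return False
        if xmin >= xmax or ymin >= ymax:
            return False
    return True

# Example usage: keep only samples that pass the check before building the
# dataset index (parse_boxes/parse_size stand in for your own annotation
# reading helpers).
# samples = [s for s in samples
#            if has_valid_boxes(parse_boxes(s), *parse_size(s))]
```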