I ran your code on a few of the images in your data directory, and it fails at the for step, data in enumerate(data_loader['train']) step. The traceback is:
Traceback (most recent call last):
  File "D:/cc/segmentation_pytorch-master/main.py", line 80, in <module>
    main(args)
  File "D:/cc/segmentation_pytorch-master/main.py", line 66, in main
    train(data_loader, model, optimizer, scheduler, tb_writer, param_dict, continue_epoch)
  File "D:\cc\segmentation_pytorch-master\utils\trainval.py", line 32, in train
    for step, data in enumerate(data_loader['train']):
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 363, in __next__
    data = self._next_data()
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 403, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\cc\segmentation_pytorch-master\utils\dataset.py", line 51, in __getitem__
    image, label = self.transform(image, label)
  File "D:\cc\segmentation_pytorch-master\utils\aug_PIL.py", line 228, in __call__
    image, label = t(image, label)
  File "D:\cc\segmentation_pytorch-master\utils\aug_PIL.py", line 181, in __call__
    image, label = getattr(self.aug_pil, aug_name)(image, label)
  File "D:\cc\segmentation_pytorch-master\utils\aug_PIL.py", line 65, in random_resize_crop
    image = tf.resized_crop(image, i, j, h, w, self.input_hw, interpolation=Image.BILINEAR)
  File "D:\ProgramData\Anaconda3\lib\site-packages\torchvision\transforms\functional.py", line 499, in resized_crop
    img = resize(img, size, interpolation)
  File "D:\ProgramData\Anaconda3\lib\site-packages\torchvision\transforms\functional.py", line 324, in resize
    raise TypeError('Got inappropriate size arg: {}'.format(size))
TypeError: Got inappropriate size arg: (256, 256)
What is causing this? I still haven't been able to solve it.
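In case it helps with diagnosis: one guess (unverified, an assumption about the code, not something the traceback proves) is that self.input_hw is not reaching torchvision as a tuple of ints. In the torchvision version the traceback points at, resize accepts an int or a 2-element iterable; if input_hw arrives instead as the string "(256, 256)" (e.g. read verbatim from a config file), the type check fails, and the error message prints the string without quotes, so it looks identical to a valid tuple. A minimal sketch under that assumption:

    # Sketch reproducing the TypeError above (assumption: self.input_hw is
    # the string "(256, 256)" rather than a tuple of ints -- unverified guess).
    from PIL import Image
    import torchvision.transforms.functional as tf

    img = Image.new('RGB', (512, 512))  # dummy image for demonstration

    tf.resize(img, (256, 256))      # OK: an int or a 2-element iterable is accepted
    # tf.resize(img, '(256, 256)') # raises: TypeError: Got inappropriate size arg: (256, 256)

    # If that is the cause, parsing input_hw into ints before the transforms fixes it:
    input_hw = '(256, 256)'
    h, w = (int(v) for v in input_hw.strip('()').split(','))
    tf.resize(img, (h, w))          # OK

Printing type(self.input_hw) just before the tf.resized_crop call in aug_PIL.py would confirm or rule this out.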
The code here is not yet fully polished. I had originally planned to finish writing and testing it within two months, but due to work commitments I didn't have time to test and refine it, so there are still some issues in the code. You'll need to explore it yourself; I'm only providing some ideas for data augmentation.