HomoGAN
About the size of the picture
According to the README, the picture size should be (640, 360),
but param.json sets crop_size to [384, 512]
and rho to 16.
Based on these sizes, when running data_loader.py the random_crop_tt function is called with height = 360, patch_size_h = 384, and rho = 16:

```python
y = np.random.randint(self.rho, height - self.rho - patch_size_h)
```

which works out to

```python
y = np.random.randint(16, -40)
```

and raises an error:
```
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/root/miniconda3/envs/exp/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/miniconda3/envs/exp/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/miniconda3/envs/exp/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/autodl-tmp/Homo/dataset/data_loader.py", line 247, in __getitem__
    img1, img2, img1_patch, img2_patch, start = self.data_aug(img1, img2, self.horizontal_flip_aug)
  File "/root/autodl-tmp/Homo/dataset/data_loader.py", line 232, in data_aug
    img1, img2, img1_patch, img2_patch, start = random_crop_tt(img1, img2, start)
  File "/root/autodl-tmp/Homo/dataset/data_loader.py", line 209, in random_crop_tt
    y = np.random.randint(16, height - self.rho - patch_size_h )
  File "mtrand.pyx", line 746, in numpy.random.mtrand.RandomState.randint
  File "_bounded_integers.pyx", line 1254, in numpy.random._bounded_integers._rand_int64
ValueError: low >= high
```
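A quick standalone check of the numbers (just a sketch with the values above plugged in, not the repo's code) shows why the call must fail:

```python
import numpy as np

# Values from the README / param.json plugged into random_crop_tt's bounds
rho = 16            # border margin from param.json
patch_size_h = 384  # crop_size[0]
height = 360        # image height per the README (640, 360)

low = rho                            # 16
high = height - rho - patch_size_h   # 360 - 16 - 384 = -40
print(low, high)                     # 16 -40  -> low >= high

try:
    y = np.random.randint(low, high)  # same bounds as in random_crop_tt
except ValueError as err:
    print("ValueError:", err)         # "low >= high", matching the traceback
```

So the crop can only succeed when height >= patch_size_h + 2 * rho, i.e. at least 416 pixels, which a 360-pixel-high image never satisfies.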
I'm just curious how it could crop a 384x512 patch from a 320x640 picture.
> I'm just curious how it could crop a 384x512 patch from a 320x640 picture.
Thanks for the correction, the test image size is 360x640. In training, the image size is 480x720, while in testing, we directly resize the whole image (360x640) to 384x512.
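In other words (a rough sketch of the intended sizes, assuming OpenCV-style resizing; the helper names are illustrative, not the repo's API):

```python
import cv2  # assumption: OpenCV is used for resizing; the repo may use a different call

# Training images are expected at 480x720, so the 384x512 crop with rho = 16 fits:
#   480 - 2*16 = 448 >= 384   and   720 - 2*16 = 688 >= 512
def prepare_train_image(img):
    return cv2.resize(img, (720, 480))   # cv2.resize takes (width, height)

# At test time the whole 360x640 frame is resized straight to 384x512, no crop:
def prepare_test_image(img):
    return cv2.resize(img, (512, 384))
```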
So should I resize the images to 480x720 myself when training the model? Because if I just follow the README and don't make any other changes, the image size stays 360x640 during training and the error above is raised.
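One possible workaround (purely hypothetical, not confirmed by the authors) would be to upsample each training pair to 480x720 before data_aug / random_crop_tt runs, for example:

```python
import cv2  # assumption: OpenCV for resizing

def resize_pair_for_training(img1, img2, target_hw=(480, 720)):
    """Hypothetical helper: bring a 360x640 pair up to 480x720 so the crop fits."""
    h, w = target_hw
    return cv2.resize(img1, (w, h)), cv2.resize(img2, (w, h))
```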