I am getting a cropped derained output: test.py seems to feed a cropped version of the input image to the network, even though the original input image size is (512, 512, 3).
Actual input image (512×512):
The input to the network seems to be something like this:
I added the following snippet to test.py to check what the input to the network actually looks like:
```python
from PIL import Image  # needed for saving the input tensor as an image

val_target_cpu, val_input_cpu = val_target_cpu.float().cuda(), val_input_cpu.float().cuda()
val_batch_output = torch.FloatTensor(val_input.size()).fill_(0)

# added by me: dump the first input tensor of the batch to disk
print(val_input_cpu[0, :, :, :].size())
x = val_input_cpu[0, :, :, :]
print(x.size())
x = x.data.cpu().permute(1, 2, 0)  # CHW -> HWC for PIL
print(x.size())
x = x.mul(255).clamp(0, 255).byte().numpy()
print(x.shape)
filename = './result_all/new_model_data/testing_our_our/' + str(i) + '_input.jpg'
img = Image.fromarray(x)  # renamed from `i`, which shadowed the loop index
img.save(filename)
# ends here

val_input.resize_as_(val_input_cpu).copy_(val_input_cpu)
val_target = Variable(val_target_cpu, volatile=True)
```
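My guess is that the crop happens in the data loader rather than in the network itself: many pix2pix-style test scripts apply a fixed-size crop transform before the batch reaches the model. Here is a minimal sketch of the pattern I am looking for in the loading code (the `image_size` value, the file path, and the exact transform pipeline are assumptions on my part, not code from this repo):

```python
import torchvision.transforms as transforms
from PIL import Image

# Hypothetical value of an option like opt.imageSize. If it is smaller
# than the test image (e.g. 256 vs. 512), every input gets cropped
# before it reaches the network.
image_size = 256

crop_transform = transforms.Compose([
    transforms.CenterCrop(image_size),  # <-- would explain the cropped input
    transforms.ToTensor(),
])

# A crop-free alternative for testing: just convert, keeping full size
# (or resize, if the network needs a fixed input size).
full_transform = transforms.Compose([
    transforms.ToTensor(),
])

img = Image.open('input.jpg').convert('RGB')  # placeholder path, e.g. a 512x512 test image
print(crop_transform(img).shape)  # torch.Size([3, 256, 256]) -- cropped
print(full_transform(img).shape)  # torch.Size([3, 512, 512]) -- full image
```

If the loader really does crop, replacing the crop with `transforms.Resize` (or removing it entirely, if the generator is fully convolutional) should give a full 512×512 output.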
The output is:

Have you solved it? I have the same problem.