DeblurGANv2

Perceptual Loss computation

Open rogertrullo opened this issue 4 years ago • 9 comments

Hi, thanks for sharing your code. I was reading the implementation of the PerceptualLoss and I am wondering why only the first image of the batch is normalized: https://github.com/VITA-Group/DeblurGANv2/blob/c5bbf780b152d5b877d9911b8323898d5f2e0a50/models/losses.py#L50-L51

Is the code intended to only work with batchsize=1?
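For reference, here is a minimal, self-contained illustration of the pattern in question. The `transforms.Normalize` call with ImageNet mean/std is an assumption for this sketch, not code copied verbatim from the repo:

```python
import torch
from torchvision import transforms

# Assumed stand-in for the normalization used inside PerceptualLoss
# (ImageNet mean/std; the exact values in the repo may differ).
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

batch = torch.rand(4, 3, 256, 256)   # a batch of 4 RGB images
batch_before = batch.clone()

# The pattern described above: only image 0 of the batch is normalized in place.
batch[0, :, :, :] = normalize(batch[0, :, :, :])

# Images 1..3 would reach the VGG feature extractor unnormalized.
print(torch.equal(batch[1:], batch_before[1:]))  # True
```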

rogertrullo avatar Sep 24 '20 18:09 rogertrullo

Hi @rogertrullo, have you found the answer?

diyar-m avatar Nov 10 '20 08:11 diyar-m

> Hi @rogertrullo, have you found the answer?

Hi @diyar-m, although I never got a reply, I assumed it was a bug. I made some modifications to the code, but I haven't tested them.

rogertrullo avatar Nov 10 '20 08:11 rogertrullo

I'm gonna change it and test it too.

I'll let you know if I get a better result.

diyar-m avatar Nov 10 '20 08:11 diyar-m

Great! Let me know how it went once you have it :)

rogertrullo avatar Nov 10 '20 08:11 rogertrullo

I got this error message. @rogertrullo

```
Traceback (most recent call last):
  File "train.py", line 256, in <module>
    trainer.train(resume_train=config['resume']['resume_training'])
  File "train.py", line 56, in train
    self._run_epoch(epoch)
  File "train.py", line 125, in _run_epoch
    loss_content = self.criterionG(outputs, targets)
  File "/home/avir/Desktop/DeblurGANv2/models/losses.py", line 59, in __call__
    return self.get_loss(fakeIm, realIm)
  File "/home/avir/Desktop/DeblurGANv2/models/losses.py", line 50, in get_loss
    fakeIm[:, :, :, :] = self.transform(fakeIm[:, :, :, :])
  File "/home/avir/Desktop/DeblurGANv2/.env/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 212, in __call__
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/avir/Desktop/DeblurGANv2/.env/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 284, in normalize
    '{}.'.format(tensor.size()))
ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([6, 3, 256, 256]).
```

diyar-m avatar Nov 10 '20 08:11 diyar-m

I don't have the code at hand right now, but yeah, the thing is that the transform expects individual images and not a batch of images. I can't remember exactly how I handled it, but it is definitely doable.

rogertrullo avatar Nov 10 '20 08:11 rogertrullo

Yeah. I'm using a for loop to iterate over the images and transform each one individually.

[screenshot of the modified code]
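A minimal sketch of that kind of per-image loop, assuming the transform is torchvision's `Normalize` with ImageNet statistics (a sketch, not the exact code from the screenshot):

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def normalize_batch(batch):
    """Apply a (C, H, W) transform to every image of an (N, C, H, W) batch."""
    out = torch.empty_like(batch)
    for i in range(batch.size(0)):
        out[i] = normalize(batch[i])
    return out

fake_batch = torch.rand(6, 3, 256, 256)
print(normalize_batch(fake_batch).shape)  # torch.Size([6, 3, 256, 256])
```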

diyar-m avatar Nov 10 '20 08:11 diyar-m

Yeah, I think I did the same with a map or something, but essentially the same.
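Something along these lines, again assuming the same `Normalize` transform (a sketch rather than the actual modification):

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

batch = torch.rand(6, 3, 256, 256)

# Iterating over a 4-D tensor yields (C, H, W) slices, so the per-image
# transform can simply be mapped over the batch and the results re-stacked.
normalized = torch.stack(list(map(normalize, batch)))
print(normalized.shape)  # torch.Size([6, 3, 256, 256])
```

A fully vectorized alternative is to broadcast the mean/std over the batch dimension, e.g. `(batch - mean.view(1, 3, 1, 1)) / std.view(1, 3, 1, 1)` with `mean` and `std` as tensors, which avoids the Python loop entirely.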

rogertrullo avatar Nov 10 '20 09:11 rogertrullo

> Yeah, I think I did the same with a map or something, but essentially the same.

Hello, may I ask how you modified the code?

ldlshizhu avatar Jan 27 '21 09:01 ldlshizhu