pix2pixHD
I ran the pix2pixHD code; the training results are good, but the test results are very bad.
Hi, I ran the pix2pixHD code on my dataset. The training results are good, but the test results are very bad. I also tested the trained model on the training data, and those results are also very bad. In this experiment, I used 795 images for training and 654 images for testing. Could you give some suggestions? Thanks a lot!
Can I ask the size of your images? It seems you have trained successfully, but I still can't train the 512p model even using the given examples; it always shows CUDA out of memory. I wonder if you have had this problem?
Hi, I successfully trained the network with 256×256 images, but now, also using 256×256 images, the network shows CUDA out of memory. So now I don't know how to run it. Oh, very bad.
@Sunsunny11 If you're using PyTorch >= 0.4, please make sure to pull the latest code, which wraps the inference function in torch.no_grad().
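For illustration, a minimal sketch of what that change amounts to; netG and input_concat are placeholder names, not necessarily the repo's exact variables:

```python
import torch
import torch.nn as nn

# Sketch only: a stand-in generator; in pix2pixHD this would be the
# trained netG loaded from a checkpoint.
netG = nn.Conv2d(3, 3, kernel_size=3, padding=1)
input_concat = torch.randn(1, 3, 256, 256)  # placeholder input tensor

netG.eval()
with torch.no_grad():
    # No autograd graph is built here, so intermediate activations are
    # freed immediately and test-time memory use drops sharply.
    fake_image = netG(input_concat)
```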
Same question, the test results are very bad.
Has anyone managed to find the problem? I use the latest code and have the same issue (good results when training, bad results when testing).
My training dataset consists of 512×512 paired grayscale images. The training results look very good, but the test results are really bad and even come out in RGB colors.
Make sure you're using the same generator for both testing and training. The README suggests setting netG to local and ngf to 32 when testing, but the training defaults are global and 64, so you'll be using a different generator if you follow it.
The default generator is more memory-intensive, so you might have to start cropping and even shrink loadSize and fineSize to avoid OOM. If your images aren't too big, you might be able to avoid that by doing inference with torch.no_grad(), by changing this here (by default, it only does that if you're using PyTorch 0.4).
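If it helps, a hedged sketch of shrinking the working resolution at test time; the flags are pix2pixHD's standard options, but the experiment name and values here are placeholders, not recommendations:

```bash
# Sketch: reduce the test-time resolution to dodge CUDA OOM.
python test.py --name my_experiment \
    --resize_or_crop scale_width --loadSize 512 --fineSize 512
```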
It may be the result of not using the same parameters for testing as were used during training: --no_instance, --n_downsample_global, --n_downsample_E, --n_blocks_local, --n_local_enhancers, etc. Those parameters must be the same as in training; otherwise they fall back to the defaults and cause artifacts.
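For illustration, a hedged sketch of keeping the architecture flags identical across both phases; the flags are pix2pixHD options, while the experiment name and values are placeholders:

```bash
# Sketch: the architecture flags define the network the checkpoint is
# loaded into, so train and test must pass identical values.
python train.py --name my_experiment --netG global --ngf 64 \
    --n_downsample_global 4 --n_blocks_global 9 --no_instance

# Same flags at test time; otherwise the checkpoint is loaded into a
# differently-configured (default) network and the output degrades.
python test.py --name my_experiment --netG global --ngf 64 \
    --n_downsample_global 4 --n_blocks_global 9 --no_instance
```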
Hi, I still have a similar problem. I checked all the parameters and made them the same for training and testing, but the testing results are still bad while the training results are very good.
@tcwang0509 @Sunsunny11 @primejava @ShaniGam @noncomputable
Has your problem been solved? I ran into the same thing: my training results are very good, but the test results are very blurry. I have 320 training images (only two scenes) at a resolution of 2560×1440. I trained for 200, 300, and 400 iterations respectively, but the test results are always very blurry.
Hope to get your help!
Did anyone find any solutions? I'm having the same trouble.
Hello, could you address this question?
Hello, senior, do you have any ideas for solving this problem? Would it be convenient to discuss it?