The model's output is already the mask. If you only want to keep the portrait, just use the mask to cut it out of the image.
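A minimal NumPy sketch of that mask-based cutout (the function name `apply_mask` and the array shapes are my assumptions for illustration, not pix2pixHD code):

```python
import numpy as np

def apply_mask(image, mask):
    """Keep only the masked (portrait) region; zero out the background.

    image: H x W x 3 uint8 array
    mask:  H x W array where nonzero pixels mark the portrait
    """
    binary = (mask > 0).astype(image.dtype)
    # Broadcast the H x W mask over the 3 color channels.
    return image * binary[..., None]
```

The same idea works with OpenCV or PIL images, since they are NumPy arrays (or convertible to them).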
batchSize must be an integer multiple of the number of GPUs. In your case, you can set batchSize to 2.
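The reason is that each batch is split evenly across the GPUs. A small sketch of that constraint (`samples_per_gpu` is a hypothetical helper, not part of the repo):

```python
def samples_per_gpu(batch_size, n_gpus):
    """Each GPU gets batch_size / n_gpus samples, so the division
    must come out even or one GPU would receive an empty batch."""
    if batch_size % n_gpus != 0:
        raise ValueError("batchSize must be an integer multiple of the GPU count")
    return batch_size // n_gpus

# e.g. batchSize 2 on 2 GPUs gives 1 sample per GPU.
```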
The input and the label must be the same size.
Use `--netG local --load_pretrain` with the path where you saved the first G. There is no need to set `continue_train`. You can refer to [this script](https://github.com/NVIDIA/pix2pixHD/blob/master/scripts/train_1024p_12G.sh).
With `python train.py --name test --label_nc 0 --netG local --niter_fix_global 1 --niter 1 --niter_decay 1`, there are 2 (niter + niter_decay) epochs in total, split into 2 stages. For the first 1 (niter_fix_global) epoch only the local enhancer is trained; after that, both the global and local networks will be trained jointly.
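The schedule above can be sketched as follows (`training_plan` is a hypothetical helper to show when G1 stays frozen, not code from the repo):

```python
def training_plan(niter, niter_decay, niter_fix_global):
    """Return (epoch, g1_frozen) pairs for the coarse-to-fine schedule.

    While g1_frozen is True, only the local enhancer G2 is updated;
    afterwards the global generator G1 trains jointly with G2.
    """
    total_epochs = niter + niter_decay
    return [(epoch, epoch <= niter_fix_global)
            for epoch in range(1, total_epochs + 1)]

# With --niter 1 --niter_decay 1 --niter_fix_global 1:
# epoch 1 freezes G1 (G2 only), epoch 2 trains G1 and G2 together.
```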
Add "--tf_log" when training
same issue...
Hi @tlatlbtle, I have the same question. Have you tried it, and how were the results?
Global means only G1 will be trained. You can refer to Figure 3 in the paper: G1 is the global network, while G2 is the local enhancer.
It's a data loader problem. When training domain A, there are 2 or 3 types of input: real_old_rgb, real_old_l, and clean. In one batch, the numbers of old and clean samples are random....