zhanglu
same as above
stylegan2-encoder-new-full can be replaced with https://github.com/rolux/stylegan2encoder
I tried to turn the polygon annotations into rectangle annotations, like this:

```python
x = []
y = []
for point in i['points']:
    x.append(point['x'])
    y.append(point['y'])
ll.extend([min(x), min(y)])
ll.extend([max(x), min(y)])
ll.extend([max(x), max(y)])
ll.extend([min(x), max(y)])
```
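The same conversion as a self-contained function (the function name is mine, not from the repo): it takes the polygon's point dicts and returns the four corners of the axis-aligned bounding rectangle, flattened in the same order as above.

```python
def polygon_to_rectangle(points):
    """Convert a polygon annotation to its axis-aligned bounding rectangle.

    `points` is a list of {'x': ..., 'y': ...} dicts; returns the four
    corners flattened as [x1, y1, x2, y2, x3, y3, x4, y4], going
    top-left, top-right, bottom-right, bottom-left.
    """
    xs = [p['x'] for p in points]
    ys = [p['y'] for p in points]
    return [min(xs), min(ys),
            max(xs), min(ys),
            max(xs), max(ys),
            min(xs), max(ys)]
```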
If only you made a performance dashboard. (^-^)
It looks like the first two character lists are not used in the source code?
@aitorzip Just like in Pix2Pix (pix2pixBEGAN.pytorch), the ReplayBuffer was used to get the conditional GAN loss. I think the ReplayBuffer here doesn't benefit the D model. Am I wrong?
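For context, the history buffer comes from the CycleGAN paper (following Shrivastava et al.): the discriminator is trained on a roughly 50/50 mix of the current fakes and previously generated ones, which is meant to stabilize D. A minimal sketch of that logic, illustrated on plain Python objects rather than tensors (names are mine):

```python
import random


class ReplayBuffer:
    """Pool of previously generated samples for discriminator training.

    push_and_pop() stores incoming fakes and, once the pool is full,
    returns each incoming item either as-is or swapped with a random
    older item from the pool (50% chance each).
    """

    def __init__(self, max_size=50):
        self.max_size = max_size
        self.data = []

    def push_and_pop(self, items):
        out = []
        for item in items:
            if len(self.data) < self.max_size:
                # Pool not full yet: store the fake and return it unchanged.
                self.data.append(item)
                out.append(item)
            elif random.random() > 0.5:
                # Return an older fake and replace it with the new one.
                idx = random.randrange(self.max_size)
                out.append(self.data[idx])
                self.data[idx] = item
            else:
                # Return the current fake; the pool is left as-is.
                out.append(item)
        return out
```

Whether this actually helps D in the unconditional CycleGAN setting is exactly the question above; the buffer only changes *which* fakes D sees, not the loss itself.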
I have the same question. Do you have an answer?
> ```python
> if torch.cuda.device_count() > 1:
>     print("Let's use", torch.cuda.device_count(), "GPUs!")
>     netG_A2B = torch.nn.DataParallel(netG_A2B)
>     netG_B2A = torch.nn.DataParallel(netG_B2A)
>     netD_A = torch.nn.DataParallel(netD_A)
>     netD_B = torch.nn.DataParallel(netD_B)
> if opt.cuda:
>     netG_A2B.cuda()
>     netG_B2A.cuda()
>     netD_A.cuda()
>     netD_B.cuda()
> ```
`in` and `out` refer to the number of channels, not the image size.
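To illustrate with a hypothetical helper (not part of the repo): the `out` argument fixes the channel count of the output feature map regardless of the input resolution; only the spatial size depends on the image, via the standard kernel/stride/padding formula.

```python
def conv2d_output_shape(out_channels, h, w, kernel, stride=1, padding=0):
    """Shape (channels, height, width) of a conv layer's output.

    The channel count is set directly by `out_channels` (the `out`
    argument of the layer); the spatial size follows from the input
    size, kernel size, stride and padding.
    """
    out_h = (h + 2 * padding - kernel) // stride + 1
    out_w = (w + 2 * padding - kernel) // stride + 1
    return out_channels, out_h, out_w
```

For example, a 3x3 conv with padding 1 maps a 3x256x256 image to 64x256x256 when `out` is 64, and a 3x128x128 image to 64x128x128 with the same layer.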
When training D, fakeA2B and fakeB2A can be detached, which skips the gradient computation for G. When training G, however, the gradients flow through D, so D's gradient computation cannot be skipped; but fakeA2B and fakeB2A do not need to be recomputed.
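The effect of detaching can be shown without torch, using a toy scalar autograd node (everything here is a hypothetical illustration, not the repo's code): when D's loss is computed on a detached fake, backprop stops at the detach point, so the "generator" weight receives no gradient while the "discriminator" weight still does.

```python
class Value:
    """Minimal scalar autograd node supporting multiplication and detach."""

    def __init__(self, data):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # no-op for leaf nodes

    def __mul__(self, other):
        out = Value(self.data * other.data)

        def _backward():
            # Product rule, then recurse into both operands.
            self.grad += out.grad * other.data
            other.grad += out.grad * self.data
            self._backward()
            other._backward()

        out._backward = _backward
        return out

    def detach(self):
        # New node with no link back to the graph: gradients stop here.
        return Value(self.data)

    def backward(self):
        self.grad = 1.0
        self._backward()


w, x = Value(2.0), Value(3.0)   # w plays the role of a generator weight
fake = w * x                    # "generated" sample, fake.data == 6.0
d = Value(0.5)                  # a discriminator weight
loss_D = d * fake.detach()      # D's loss on the detached fake
loss_D.backward()               # d.grad becomes 6.0; w.grad stays 0.0
```

Without the `detach()` call, the same backward pass would also deposit a gradient into `w`, which is wasted work when only D's parameters are being updated at that step.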