GLCIC-PyTorch
Places2 or ImageNet dataset?
Excuse me, I'd like to know if you have models trained on the Places2 or ImageNet datasets, and if you could provide them. Thank you!
@zhengbowei
Actually, I tried to do that, but finally gave it up. Places2 and ImageNet are far larger datasets than CelebA, and I found that it would take more than 3 or 4 months to complete the whole training process in my environment (GTX 1080 Ti x 4). I cannot occupy my GPUs for months only to create a model. This is why I gave it up.
I am going to write a paper that uses the code you implemented. May I cite your repository in the paper? Thank you for your contribution.
@zhengbowei
Of course, feel free to cite my code base. I hope your paper gets accepted at a good conference :)
Hello, I used your code to train on more than 1 million ImageNet images, but the results are very poor: the inpainted regions come out blurry. Could I ask how the parameter settings for training on ImageNet should differ from those for training on CelebA? Thank you.
ImageNet was apparently not used to train the proposed model in the original paper, so it's difficult to suggest a promising training setting.
However, the paper says they used Places2. That dataset is similar to ImageNet, so I think the training settings they used for Places2 would work to some extent. Please add the following options when executing the training script:
--hole_min_w 96 --hole_max_w 128 --hole_min_h 96 --hole_max_h 128 --cn_input_size 256 --ld_input_size 128 --bsize 96 --data_parallel --arc places2
The last option, --arc, slightly changes the network architecture of the Context Discriminator. Please check models.py if you're interested.
Be careful about GPU memory utilization. The above setting requires a large amount of memory (the authors used Tesla K80 (24 GB) x 4 to conduct their experiments).
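For reference, the gist of that architectural difference can be sketched as follows. This is only a rough illustration, not the repository's actual code: the class name, channel counts, and kernel/stride/padding values are assumptions (the authoritative definitions are in models.py). It shows the idea of an arc flag adding one extra stride-2 convolution before the final linear layer, so that a larger input patch still flattens to the feature size the linear layer expects.

import torch
import torch.nn as nn

class DiscriminatorSketch(nn.Module):
    """Hypothetical sketch of an arc-dependent discriminator backbone (not the repo's code)."""
    def __init__(self, input_size=128, arc='places2'):
        super().__init__()
        def block(cin, cout):
            # assumed conv parameters: kernel 5, stride 2, padding 2 (halves H and W)
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=5, stride=2, padding=2),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True))
        layers = [block(3, 64), block(64, 128), block(128, 256),
                  block(256, 512), block(512, 512)]
        if arc == 'places2':
            # the places2 variant adds one more downsampling stage
            layers.append(block(512, 512))
        self.features = nn.Sequential(*layers)
        n_down = len(layers)
        # flattened feature size after all stride-2 convolutions
        feat = 512 * (input_size // 2 ** n_down) ** 2
        self.linear = nn.Linear(feat, 1024)

    def forward(self, x):
        x = self.features(x)
        return torch.relu(self.linear(x.flatten(start_dim=1)))

With input_size=128 and arc='places2', the flattened size is 512 * 2 * 2 = 2048, which is what the final linear layer in this sketch expects.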
Hello, I am training on the Places2 dataset and the last option --arc is set to places2, but in Training Phase 2 the code reports this error:

RuntimeError: size mismatch, m1: [24 x 4608], m2: [2048 x 1024] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:268

The relevant forward pass in models.py is:

def forward(self, x):
    x = self.bn1(self.act1(self.conv1(x)))
    x = self.bn2(self.act2(self.conv2(x)))
    x = self.bn3(self.act3(self.conv3(x)))
    x = self.bn4(self.act4(self.conv4(x)))
    x = self.bn5(self.act5(self.conv5(x)))
    if self.arc == 'celeba':
        x = self.act6(self.linear6(self.flatten6(x)))
    elif self.arc == 'places2':
        x = self.bn6(self.act6(self.conv6(x)))
        x = self.act7(self.linear7(self.flatten7(x)))
    return x
Did you apply these options when executing the training script?
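A quick way to see why those options matter for the error above: the flattened feature size that linear7 receives is fixed by the patch size fed to the discriminator. Below is a rough, illustrative sanity check (flattened_size is a hypothetical helper), assuming six stride-2 convolutions with kernel 5 and padding 2 in the places2 branch, as in the forward pass quoted above; the exact values are in models.py.

def flattened_size(ld_input_size, n_down=6, channels=512):
    # spatial size after n_down kernel-5, stride-2, padding-2 convolutions
    s = ld_input_size
    for _ in range(n_down):
        s = (s + 2 * 2 - 5) // 2 + 1
    return channels * s * s

print(flattened_size(128))  # 2048 -> matches the expected input of linear7 (m2: [2048 x 1024])
print(flattened_size(160))  # 4608 -> matches the reported m1 feature size

Under these assumptions, 4608 corresponds to larger patches than the 128 x 128 that --ld_input_size 128 would produce (160 x 160 and 192 x 192 both give 4608), so the mismatch suggests the discriminator is receiving patches of a different size than the one linear7 was built for.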