GLCIC-PyTorch

Places2 or ImageNet dataset?

Open zhengbowei opened this issue 5 years ago • 7 comments

Excuse me, I'd like to know whether you have models trained on the Places2 or ImageNet datasets, and if so, whether you could provide them. Thank you!

zhengbowei avatar Sep 16 '19 00:09 zhengbowei

@zhengbowei

Actually, I tried to do that, but eventually gave it up. Places2 and ImageNet are far larger datasets than CelebA, and I found that completing the whole training process would take more than 3 or 4 months in my environment (GTX 1080 Ti x 4). I cannot tie up my GPUs for months just to create a single model, which is why I gave up.

otenim avatar Sep 18 '19 19:09 otenim

I am going to write a paper that uses the code you implemented. May I cite your repository in the paper? Thank you for your contribution.


zhengbowei avatar Sep 18 '19 23:09 zhengbowei

@zhengbowei

Of course, feel free to cite my code base. I hope your paper gets accepted at a good conference :)

otenim avatar Sep 19 '19 08:09 otenim

Hello, I used your code to train on more than 1 million ImageNet images, but the results are very poor: the inpainted regions are blurry. Could you tell me how the parameter settings for training on ImageNet should differ from those for CelebA? Thank you.


zhengbowei avatar Sep 24 '19 08:09 zhengbowei

ImageNet apparently wasn't used to train the proposed model in the original paper, so it's hard to suggest a promising training setting for it.

However, the paper says the authors used Places2. That dataset is similar to ImageNet, so I think the training setting they used for Places2 should work to some extent. Please add the following options when executing the training script:

--hole_min_w 96 --hole_max_w 128 --hole_min_h 96 --hole_max_h 128 --cn_input_size 256 --ld_input_size 128 --bsize 96 --data_parallel --arc places2
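
For concreteness, a full invocation might look like the sketch below. The train.py entry point and the positional dataset/result arguments follow the repository's README-style usage, and the two paths are placeholders:

```bash
# Placeholder paths; point these at your Places2 dataset and an output directory.
python train.py datasets/places2 results/places2 \
    --hole_min_w 96 --hole_max_w 128 \
    --hole_min_h 96 --hole_max_h 128 \
    --cn_input_size 256 --ld_input_size 128 \
    --bsize 96 --data_parallel --arc places2
```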

The last option, --arc, slightly changes the network architecture of the context discriminator. Please check models.py if you're interested.
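
For reference, here is a minimal sketch of what the --arc switch amounts to in the local discriminator's head, using the layer names from the forward() quoted later in this thread. The 512-channel width and the stride-2 halving per conv are assumptions based on the original GLCIC architecture, not the repository's exact code:

```python
import torch.nn as nn

def ld_head(arc: str) -> nn.Module:
    """Sketch of the local-discriminator head selected by --arc."""
    if arc == 'celeba':
        # ld_input_size 96 -> five stride-2 convs -> 3 x 3 x 512 = 4608 features
        return nn.Sequential(nn.Flatten(), nn.Linear(3 * 3 * 512, 1024), nn.ReLU())
    elif arc == 'places2':
        # an extra conv6 halves the feature map once more:
        # ld_input_size 128 -> six stride-2 convs -> 2 x 2 x 512 = 2048 features
        return nn.Sequential(nn.Flatten(), nn.Linear(2 * 2 * 512, 1024), nn.ReLU())
    raise ValueError(f'unknown arc: {arc}')
```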

Be careful about GPU memory usage: the above setting requires a large amount of memory (the authors used Tesla K80 (24 GB) x 4 to conduct their experiments).

otenim avatar Sep 24 '19 18:09 otenim

Hello, I trained my project on the Places2 dataset with the last option --arc set to places2, but in Training Phase 2 the code raises an error:

RuntimeError: size mismatch, m1: [24 x 4608], m2: [2048 x 1024] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:268

In models.py:

```python
def forward(self, x):
    x = self.bn1(self.act1(self.conv1(x)))
    x = self.bn2(self.act2(self.conv2(x)))
    x = self.bn3(self.act3(self.conv3(x)))
    x = self.bn4(self.act4(self.conv4(x)))
    x = self.bn5(self.act5(self.conv5(x)))
    if self.arc == 'celeba':
        x = self.act6(self.linear6(self.flatten6(x)))
    elif self.arc == 'places2':
        x = self.bn6(self.act6(self.conv6(x)))
        x = self.act7(self.linear7(self.flatten7(x)))
    return x
```

zhengbowei avatar Dec 15 '19 11:12 zhengbowei

Did you apply these options when executing the training script?

otenim avatar Dec 15 '19 13:12 otenim
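
For readers who hit the same error: the shapes in the message decode directly. m2 [2048 x 1024] is the weight of the places2 branch's final linear layer (2 x 2 x 512 = 2048 input features), while the incoming batch was flattened to 4608 = 3 x 3 x 512, so the discriminator received patches of a different spatial size than the places2 head expects. That is consistent with otenim's question: if the --cn_input_size 256 and --ld_input_size 128 options are omitted, the crops keep the CelebA defaults. A quick arithmetic check (a sketch only; the exact per-conv output size depends on the padding in models.py):

```python
def flat_features(input_size: int, n_convs: int, channels: int = 512) -> int:
    """Flattened feature count after n stride-2 convs, assuming each
    conv halves the spatial size (padding details may differ)."""
    side = input_size
    for _ in range(n_convs):
        side //= 2
    return side * side * channels

# The places2 head expects 128 -> six convs -> 2 x 2 x 512:
assert flat_features(128, 6) == 2048  # matches m2: [2048 x 1024]
# One combination yielding the observed 4608 is a 96x96 (CelebA-sized)
# patch through five convs:
assert flat_features(96, 5) == 4608   # matches m1: [24 x 4608]
```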