How to get the training set on Places2?

Open ljjcoder opened this issue 6 years ago • 8 comments

Thank you for your excellent work. In your paper, you introduced how to get 256x256 images from the original images on CelebA and Paris StreetView, but for Places2, how do you get the 256x256 images?

ljjcoder avatar Feb 26 '19 03:02 ljjcoder

@ljjcoder If you have already downloaded the high-resolution Places2 dataset, you can set INPUT_SIZE: 256 in your configuration file and pass the centerCrop=False argument to the resize call on the following line to prevent center cropping: https://github.com/knazeri/edge-connect/blob/97c28c62ac54a59212cc9db4e78f36c5436c0b72/src/dataset.py#L141
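To illustrate what that flag controls, here is a minimal sketch of a resize helper in the same spirit as the one in src/dataset.py. The function name matches the thread, but the body (using Pillow and NumPy) is an assumption for illustration, not the repository's actual implementation:

```python
import numpy as np
from PIL import Image

def resize(img, height, width, centerCrop=True):
    """Resize a HxWxC uint8 array to (height, width).

    If centerCrop is True and the image is not square, the largest
    centered square is cropped first, so the aspect ratio is preserved.
    With centerCrop=False the image is resized directly (no cropping).
    """
    imgh, imgw = img.shape[:2]
    if centerCrop and imgh != imgw:
        side = min(imgh, imgw)
        j = (imgh - side) // 2
        i = (imgw - side) // 2
        img = img[j:j + side, i:i + side]
    # PIL's resize takes (width, height) order
    return np.array(Image.fromarray(img).resize((width, height)))
```

Passing centerCrop=False at the call site, as suggested above, skips the cropping branch so the full image is squeezed to 256x256 instead of losing its borders.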

If you don't have the high-resolution dataset, you can download the 256x256 version from the Places2 website under the Data of Places-Extra69 section. You can also find a 256x256 version of the validation and test sets on the same page.

knazeri avatar Feb 26 '19 03:02 knazeri

@knazeri Thanks for your reply! Do you mean it's only necessary to change mask = self.resize(mask, imgh, imgw) to mask = self.resize(mask, imgh, imgw, centerCrop=False)? If I only do that, the mask is resized to 256 directly, but the image still uses center crop. I guess the definition also needs to change from def resize(self, img, height, width, centerCrop=True): to def resize(self, img, height, width, centerCrop=False):. Is that right?

ljjcoder avatar Mar 08 '19 16:03 ljjcoder

@knazeri I'd also like to ask: what is the difference between Data of Places-Extra69 and Data of Places365-Challenge 2016? Which one do you use, or are both used?

ljjcoder avatar Mar 08 '19 16:03 ljjcoder

@ljjcoder You don't need to change the method definition; only change the method call. Of course, that is if you have already downloaded the high-resolution version of the Places2 dataset. We used the 256x256 version of the Places2-Challenge 2016 full dataset for training!

knazeri avatar Mar 08 '19 18:03 knazeri

@knazeri Yes, I downloaded the high-resolution version of Places2. I'm still confused: if I just change mask = self.resize(mask, imgh, imgw) to mask = self.resize(mask, imgh, imgw, centerCrop=False), the original image still uses center crop. Is that the same as your training data?

ljjcoder avatar May 07 '19 15:05 ljjcoder

@ljjcoder Honestly, it doesn't really make any difference. You can either center-crop an image or resize it to a fixed size. In either scenario, the mask hides some part of the image and your network learns to inpaint the missing part! As I mentioned before, our training dataset was the 256x256 version of the Places2 dataset.
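A quick numeric check of the point above, assuming Pillow and NumPy are available: both strategies produce a 256x256 training image, so the network sees the same input shape either way. The only difference is that center-cropping preserves the aspect ratio (discarding borders), while direct resizing keeps the whole image but distorts non-square inputs:

```python
import numpy as np
from PIL import Image

# A synthetic non-square 300x400 RGB image as a stand-in for a Places2 sample
arr = np.zeros((300, 400, 3), dtype=np.uint8)

# Strategy 1: direct resize (centerCrop=False); whole image kept, aspect distorted
direct = np.array(Image.fromarray(arr).resize((256, 256)))

# Strategy 2: center-crop the largest square first, then resize; aspect preserved
side = min(arr.shape[:2])
j = (arr.shape[0] - side) // 2
i = (arr.shape[1] - side) // 2
cropped = np.array(Image.fromarray(arr[j:j + side, i:i + side]).resize((256, 256)))

print(direct.shape, cropped.shape)  # both are (256, 256, 3)
```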

knazeri avatar May 18 '19 00:05 knazeri

Hello, I've also been running this code recently. May I add you to discuss it? My WeChat: loveanshen, my QQ: 519838354, my email: [email protected]. Looking forward to your reply amid your busy schedule.

anshen666 avatar Dec 10 '19 08:12 anshen666

The files in the Data of Places-Extra69 section are only 1.4 GB (256x256) and contain just 98,721 training images. Is that enough to train the model?

napohou avatar Nov 19 '21 14:11 napohou