pix2pixHD
Bias in the data
Hi all,
I'm working with students on a project using pix2pixHD and, while trying to reproduce the results on images from another dataset, we believe we found a bias in the data used for training the network. There is a 3-pixel-wide black padding around the example images, which seems to be needed to get correct generated images (if you try new images without the padding, you get poor results).
I thought this might be worth mentioning somewhere for anyone trying to reproduce the results on other datasets. I also think this is linked to some other issues here.
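To illustrate, here is a minimal sketch of how such a border could be stamped onto a label map before inference. This is not code from the repository; the function name, border width, and border value are assumptions based on what we observed (a 3-pixel black frame), and it uses NumPy arrays as you would get from loading a single-channel label image:

```python
import numpy as np

def add_border(label_map: np.ndarray, width: int = 3, value: int = 0) -> np.ndarray:
    """Overwrite a `width`-pixel frame around the label map with `value`.

    width=3 and value=0 (black) match the frame we observed in the
    training data; adjust if your dataset uses a different border class.
    """
    out = label_map.copy()
    out[:width, :] = value   # top rows
    out[-width:, :] = value  # bottom rows
    out[:, :width] = value   # left columns
    out[:, -width:] = value  # right columns
    return out

# Example: an 8x8 map of class 5 gets a 3-pixel black frame.
lm = np.full((8, 8), 5, dtype=np.uint8)
framed = add_border(lm)
```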
@D3lt4lph4 Can you please elaborate on what this is discussing?
We believe we found a bias in the input data.
We are trying to reproduce the results from this repository using another dataset. From this dataset we have images such as:
and using this as input (semantic label map + instance map with the correct encoding of the instance map) we were getting results like the following:
we searched a bit and found that by adding the following mask:
we were getting:
Still not perfect, but way better. Apparently, a frame of 3 pixels (maybe 5, I'm not sure about the exact number, I'll have to ask the students again) is required to get correct output.
I'm not sure what to do with that information; maybe add a warning to the main README? I just saw other issues about problems reproducing on other datasets and thought this might help.
Thank you for the reply. Can you tell how you added this mask to the input data?
I'm not sure exactly (the students did it), but if I remember correctly the mask is a specific class in the semantic label map. Once you have extracted it from the examples in the repository, you just iterate over your mask image and copy the mask value into the new image wherever it appears.
Something like this (untested, just to show the idea; assumes mask and new_semantic_label_map are single-channel (H, W) arrays):
rows, cols = mask.shape[:2]
for r in range(rows):
    for c in range(cols):
        if mask[r, c] == mask_value:
            new_semantic_label_map[r, c] = mask_value
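The same copy can also be done in one vectorized NumPy step, which is much faster on full-size images. The variable names and the small toy arrays below are just for illustration:

```python
import numpy as np

# Toy reference mask: class 7 forms a 3-pixel frame (assumed class id).
mask = np.zeros((8, 8), dtype=np.uint8)
mask_value = 7
mask[:3, :] = mask_value
mask[-3:, :] = mask_value
mask[:, :3] = mask_value
mask[:, -3:] = mask_value

# New label map, initially all class 5; stamp the frame class into it
# wherever the reference mask carries that class.
new_semantic_label_map = np.full((8, 8), 5, dtype=np.uint8)
new_semantic_label_map[mask == mask_value] = mask_value
```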
I encountered the same phenomenon.