taming-transformers
Load an example segmentation and visualize
I get this issue when I use my own image in the Load an example segmentation and visualize section
How can I fix this? Thanks.
```
IndexError                                Traceback (most recent call last)
<ipython-input-46-1334a87733d0> in <module>()
      4 segmentation = Image.open(segmentation_path)
      5 segmentation = np.array(segmentation)
----> 6 segmentation = np.eye(182)[segmentation]
      7 segmentation = torch.tensor(segmentation.transpose(2,0,1)[None]).to(dtype=torch.float32, device=model.device)

IndexError: index 255 is out of bounds for axis 0 with size 182
```
I guess you loaded a regular image instead of a segmentation map. The segmentation path should point to a file that is already segmented, like: data/sflckr_segmentations/norway/25735082181_999927fe5a_b.png
If it's a segmentation file, it might be discussed in #8.
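A quick way to check is to look at the label values in the loaded array: for this notebook, every pixel value must be a class id below 182. A regular photo, or a map that uses 255 as an "unlabeled" marker, will trigger exactly this IndexError. A minimal sketch, using a small synthetic array in place of your file (the remap-to-class-0 step is my own workaround, not something from the repo):

```python
import numpy as np

# Stand-in for np.array(Image.open(segmentation_path)); a photo or a map
# with an "unlabeled" marker typically contains values like 255.
segmentation = np.array([[0, 12, 255],
                         [3, 181, 255]], dtype=np.uint8)

# Inspect the label ids: any value >= 182 will break np.eye(182)[segmentation].
print(np.unique(segmentation))

# One defensive option: remap out-of-range ids to class 0 before one-hot encoding.
segmentation = np.where(segmentation < 182, segmentation, 0)
one_hot = np.eye(182)[segmentation]  # shape (H, W, 182), no IndexError
print(one_hot.shape)
```

Note that remapping to class 0 only silences the error; if the file isn't a real segmentation map in this repo's label scheme, the generated result will still be meaningless.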
How can I convert an image to a segmentation map?
I haven't tried it yet, but I guess https://github.com/CompVis/taming-transformers/blob/master/scripts/extract_segmentation.py computes the segmentation.
Where can I find the segmentation files online?
This folder contains multiple subfolders with example files: https://github.com/CompVis/taming-transformers/tree/master/data/sflckr_segmentations
I want to test non-nature example files. I want to try segmentations that have people in them.
I tried this segmentation map, but I still get the same error.
You need to segment the areas in exactly the same way as they did in this repo. I don't think it will work well, since the model is not optimized for humans, and I doubt this kind of segmentation would work well for humans even if it were. The New Zealand example image shows sheep on a meadow, and the model has huge problems generating sheep that fit into the given places. I think the method needs some artistic freedom to generate good results, so segmentation maps with a lot of detail don't seem to work very well.
The paper includes examples with humans, but they don't use these segmentation maps for those.
But if you want to try it, just download the repo, put your image with humans in the data/sflckr_images folder, add it to the data/sflckr_examples.txt file, and then run scripts/extract_segmentation.py. This should generate the segmentations.
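The steps above can be sketched as a shell sequence, run from the repo root. I haven't verified this end to end; "my_photo.jpg" is a placeholder for your own file, and the final script needs the repo's dependencies and model weights, so it is shown commented out:

```shell
# Run from the taming-transformers checkout.
mkdir -p data/sflckr_images
touch my_photo.jpg                            # stand-in for your real image
cp my_photo.jpg data/sflckr_images/           # 1. put the image in the folder
echo "my_photo.jpg" >> data/sflckr_examples.txt  # 2. register it in the list
# 3. compute the segmentation maps:
# python scripts/extract_segmentation.py
```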