cp-vton-plus
Unexpected results from test model when running TOM
There seem to be artifacts when running TOM testing. Does this suggest that the input to TOM is given incorrectly, or are you seeing the same results as below? The input is fixed as reported in #8, FYI.


No, this is not expected either. Here it looks like two people got merged. Is it the same listdir issue? Can you do the same input debugging here as well?
The corrected segmentation will look like this:

And the final TOM output will look like this:

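On the listdir point above: `os.listdir` returns entries in arbitrary order, so building person/cloth pairs from two separate directory listings can silently mismatch them, whereas reading explicit pairs from a file (as test_pairs.txt does) pins the alignment. A minimal sketch with invented file names:

```python
import os
import tempfile

# Build a throwaway pairs file and read it back, mirroring how an explicit
# pairs list (like test_pairs.txt) fixes person/cloth alignment instead of
# relying on directory-listing order.
with tempfile.TemporaryDirectory() as d:
    pairs_path = os.path.join(d, "pairs.txt")
    with open(pairs_path, "w") as f:
        f.write("person_1.jpg cloth_1.jpg\n")
        f.write("person_2.jpg cloth_2.jpg\n")
    with open(pairs_path) as f:
        pairs = [line.split() for line in f]
# pairs == [['person_1.jpg', 'cloth_1.jpg'], ['person_2.jpg', 'cloth_2.jpg']]
```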
Yes, the segmentation is correct. I think the issue now occurs in the input to TOM. Where does TOM fetch the contents of the segmentations? I can't seem to find it.
It should be in the cp_dataset.py file.
Thanks! I'm checking and will get back to you. Really appreciate the quick answers! Also, regarding the final results of TOM: are they the ones in "p_rendered" or "try-on"?
You are welcome. The "try-on" is the final result. We blend the "p_rendered" results with the warped_clothes to get the "try_on" results.
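The blending described above can be sketched roughly as follows (a simplified NumPy illustration, not the repository's actual code, which uses PyTorch tensors; the composition-mask name `m_composite` and the toy values are assumptions):

```python
import numpy as np

def blend_tryon(p_rendered, warped_cloth, m_composite):
    """Blend the rendered person with the warped cloth using a
    composition mask in [0, 1]; 1 keeps the warped cloth pixel."""
    return warped_cloth * m_composite + p_rendered * (1.0 - m_composite)

# Toy 2x2 single-channel example
p_rendered = np.array([[0.2, 0.4], [0.6, 0.8]])
warped_cloth = np.array([[1.0, 1.0], [1.0, 1.0]])
m_composite = np.array([[1.0, 0.0], [0.5, 0.0]])

tryon = blend_tryon(p_rendered, warped_cloth, m_composite)
# e.g. tryon[1, 0] mixes half warped cloth, half rendered person
```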
@minar09 , I am getting closer. Part of the issue in the TOM stage comes from line 156 in test.py, in the function
test_tom()
:
im_names = inputs['im_name']
I find that if I sort this list of file names with:
im_names.sort()
I get better results, where most of the try-ons look good. I found this by noticing that the composite image and the rendered image with the same file name sometimes came from different images.
However, I am still seeing artifacts and mixed images that don't look right. I uploaded my entire inferred output to the Dropbox link below and would really appreciate it if you could confirm that it is in fact still incorrect. Many outputs look good, though, as far as I can tell.
Do you have any idea what could cause these issues? I should also mention that I am running your code without an Nvidia GPU and thus commented out all uses of cuda(). Do you think this can have any effect?
Thanks again for all your help!
Dropbox: https://www.dropbox.com/sh/db2d9tsv4e4038f/AAB2m6QvG8KflVTbumKNW5zWa?dl=0
Successful cases:

Unsuccessful cases?


Hi @RubReh , good to know that you are getting better results. Yes, some of the results are still not as expected. For the GMM and TOM results, we probably shouldn't sort the file names, since the files' ordering comes from the test_pairs.txt file. If you can debug the inputs of the wrong ones, I think you can find the mismatches. CPU vs. GPU probably doesn't have any effect on this. Sorry that I can't be of more help right now. Thank you very much for your efforts.
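The mismatch risk mentioned here can be shown with a toy example (file names invented): sorting only the filename list while the corresponding image batch keeps its original order decouples names from data.

```python
# Toy illustration: names and images start out aligned by index.
im_names = ["c.jpg", "a.jpg", "b.jpg"]
images = ["img_c", "img_a", "img_b"]  # stand-ins for the image tensors

# Sorting only the names breaks the pairing:
sorted_names = sorted(im_names)
mismatched = list(zip(sorted_names, images))
# mismatched[0] == ('a.jpg', 'img_c'): 'a.jpg' now labels c's image.

# If reordering were ever needed, names and images must move together:
aligned = sorted(zip(im_names, images))
# aligned[0] == ('a.jpg', 'img_a')
```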
Hi again @minar09. It seems the batch size is causing the issue. I generate the correct dataset as long as I change the batch size from your default of 4 to 1. Can you confirm that this in fact looks like your results?
https://www.dropbox.com/sh/ooau1wl9d0od53y/AAAe4YBc48Mq41bA8AZ2h8Esa?dl=0
If you still need it, I can create a PR for the data prep fix and add a README line about the batch sizing.
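The batch-size workaround above makes each batch carry exactly one sample, so per-batch save logic can only ever pair an output with one file name. A pure-Python sketch of that idea (names invented; the repository itself batches via PyTorch's DataLoader):

```python
def batched(items, batch_size):
    """Yield consecutive slices of `items`, preserving dataset order."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

names = ["a.jpg", "b.jpg", "c.jpg", "d.jpg"]

# With batch_size=1, every batch holds a single name, so a save routine
# cannot attach an output to the wrong file within a batch.
batches = list(batched(names, 1))
# batches == [['a.jpg'], ['b.jpg'], ['c.jpg'], ['d.jpg']]
```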
Hi @RubReh , yeah, these results look like the expected ones. Thank you very much for your time and effort. Sure, your pull request is welcome. That would be a great help for others trying out our repository. Thanks.
Hi @RubReh , I am facing this issue. Could you please help me fix it? Thanks a lot in advance.