Thái Thanh Tuấn
I think this problem comes from a different PyTorch version.
> Do you have solved the problem? I am meeting the same problem, and I don't know how to solve it. As @TidamCo said, the input image should be in...
I think this is not the expected result for CP-VTON+. Maybe you got this bad result because of a wrong warped cloth (perhaps from wrong OpenPose output or segmentation of the reference...
The GMM network is like: input --> correlation --> TPS transformation parameters. Then output = TPS transformation(input, TPS transformation parameters). One easy way is that you can increase the...
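The correlation step above can be sketched as follows. This is a minimal NumPy illustration (not the repository's actual PyTorch layer): each spatial location of one feature map is compared, via a dot product over channels, with every location of the other, and the resulting correlation volume is what the TPS parameter regressor consumes.

```python
import numpy as np

def correlation_map(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Dense correlation between two feature maps of shape (C, H, W).

    Output has shape (H*W, H, W): channel i holds the dot products of
    location i of feat_a against every location of feat_b.
    """
    c, h, w = feat_a.shape
    a = feat_a.reshape(c, h * w)       # (C, H*W)
    b = feat_b.reshape(c, h * w)       # (C, H*W)
    corr = a.T @ b                     # (H*W, H*W)
    return corr.reshape(h * w, h, w)   # one channel per location of feat_a

fa = np.random.rand(8, 4, 3)           # toy person features
fb = np.random.rand(8, 4, 3)           # toy cloth features
corr = correlation_map(fa, fb)
print(corr.shape)                      # (12, 4, 3)
```

Note how the number of output channels (H*W) depends on the feature-map resolution; this is why the regressor's input size must change when the input image size changes.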
You can change cp_dataset for pants by replacing the label 5 (top clothing) with the label of the pants or skirt in your dataset. Some places that you have...
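A minimal sketch of that label swap, assuming your human-parsing map stores one integer label per pixel (the label ids below are placeholders; check the ids your own segmentation uses):

```python
import numpy as np

TOP_CLOTHES_LABEL = 5   # label used by cp_dataset for the top garment
PANTS_LABEL = 9         # assumption: replace with your dataset's pants/skirt id

def cloth_mask(parse: np.ndarray, label: int = PANTS_LABEL) -> np.ndarray:
    """Binary mask of the garment region from a human-parsing map."""
    return (parse == label).astype(np.float32)

parse = np.array([[0, 5, 9],
                  [9, 9, 0]])
print(cloth_mask(parse))                      # pants pixels -> 1.0
print(cloth_mask(parse, TOP_CLOTHES_LABEL))   # top-garment pixels -> 1.0
```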
You have to change the network architecture to your size (networks.py) and also the data augmentation. ------------------------ For a fast test, you can resize all inputs, feed them into the network, then resize...
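The fast-test idea above can be sketched like this. A nearest-neighbour resize is used here only to keep the example dependency-free; in practice you would use cv2.resize or torch.nn.functional.interpolate, and 256x192 is the network size assumed in this sketch:

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize for an (H, W, C) array."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# Resize any input down to the network's expected size, run the model,
# then resize the output back up to the original resolution.
img = np.random.rand(512, 384, 3)
small = resize_nearest(img, 256, 192)     # feed this into the network
print(small.shape)                        # (256, 192, 3)
restored = resize_nearest(small, 512, 384)
print(restored.shape)                     # (512, 384, 3)
```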
Sorry, I'm wrong. I mean this.
Only change the image input size and image output size. On line 116, do not change here --> this is the number of channels, not the image size. If I have...
The input for the regression is based on the output of the correlation. Please check by printing out the correlation variable in  Then, change the input_nc in line...
At position A, print out the shape of x (with the normal size it is: 64 x 3 x 4 = 768, at B). Then, change the number at B...
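The debugging step above can be sketched as follows. The tensor shape and positions "A"/"B" are illustrative: the point is that the flattened feature size (channels x height x width) printed at A is what the fully connected layer at B must accept as its input size.

```python
import numpy as np

# Hypothetical stand-in for the tensor x at position A: (batch, C, H, W).
x = np.zeros((1, 64, 3, 4))

# Flattened size per sample = C * H * W; this number goes into the
# fully connected layer at position B.
flat = int(np.prod(x.shape[1:]))
print(x.shape, flat)   # (1, 64, 3, 4) 768
```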