cp-vton-plus
Hello, why 40 * loss_gic?

```python
Lgic = gicloss(grid)
# 200x200 * 0.001 = 40
Lgic = Lgic / (grid.shape[0] * grid.shape[1] * grid.shape[2])
loss = Lwarp + 40 * Lgic  # total GMM loss
```

Why loss_gic, and why 40 = 200x200 * 0.001?
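As a numeric sanity check (a standalone sketch; the batch size of 1 and the (B, 200, 200, 2) grid shape are assumptions based on the 200x200 comment above): dividing Lgic by `grid.shape[0] * grid.shape[1] * grid.shape[2]` spreads it over the 200x200 = 40,000 grid cells, so lambda = 40 gives each cell an effective weight of 40 / 40,000 = 0.001.

```python
# Standalone check of the scaling in the snippet above.
# Assumption: grid has shape (batch, H, W, 2) with H = W = 200.
batch, H, W = 1, 200, 200

raw_lgic = 1.0                     # a pretend un-normalized Lgic value
lgic = raw_lgic / (batch * H * W)  # divide by grid.shape[0]*[1]*[2]
weighted = 40 * lgic               # lambda = 40 in the total loss

print(weighted)      # 0.001
print(40 / (H * W))  # 0.001 -- same effective per-cell weight
```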
@shufangxun I think your question is about the number 40. This is a hyperparameter, and we arrived at it through several training runs. At first we tried loss = Lwarp + lambda * Lgic with lambda = 1, monitoring Lwarp and Lgic during training. Following previous works, we chose lambda = 40 so that Lgic contributes meaningfully to the final loss.
If lambda is too big, the cloth cannot deform much, similar to an affine transform.
In addition to this formulation of the regularization term Lgic, we also tried variants based on L2 and L1 distances (we use L1 in the paper).
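For illustration only (this is not the repository's actual GicLoss — the (H, W, 2) grid shape and the second-difference formulation here are assumptions): a grid-regularity term can penalize unequal spacing between neighbouring grid points, using either an L1 or an L2 distance.

```python
import numpy as np

def interval_consistency(grid, norm="l1"):
    """Penalize unequal spacing between neighbouring grid points.

    grid: array of shape (H, W, 2) holding (x, y) sampling coordinates.
    norm: "l1" (as in the paper) or "l2".
    Simplified sketch, not the repo's GicLoss implementation.
    """
    # Second differences along each axis: zero for an evenly spaced grid.
    dxx = grid[:, 2:, :] - 2 * grid[:, 1:-1, :] + grid[:, :-2, :]
    dyy = grid[2:, :, :] - 2 * grid[1:-1, :, :] + grid[:-2, :, :]
    if norm == "l1":
        return np.abs(dxx).sum() + np.abs(dyy).sum()
    return (dxx ** 2).sum() + (dyy ** 2).sum()

# A perfectly regular grid has zero penalty under either norm.
ys, xs = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5),
                     indexing="ij")
regular = np.stack([xs, ys], axis=-1)
print(interval_consistency(regular, "l1"))  # 0.0
```

With an L2 norm, large local distortions are penalized quadratically, so the warp tends to be smoothed more aggressively than with L1.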
Thanks. Besides that, what is the relative scale between loss_warp and loss_gic? Here is mine, about 10:1:
```
step: 600, time: 0.405, loss: 0.404728, loss_warp: 0.363407, loss_reg: 0.041321
step: 620, time: 0.409, loss: 0.389795, loss_warp: 0.350896, loss_reg: 0.038900
step: 640, time: 0.416, loss: 0.457816, loss_warp: 0.418512, loss_reg: 0.039304
step: 660, time: 0.407, loss: 0.675993, loss_warp: 0.621634, loss_reg: 0.054359
step: 680, time: 0.403, loss: 0.610995, loss_warp: 0.567957, loss_reg: 0.043037
step: 700, time: 0.410, loss: 0.339125, loss_warp: 0.298946, loss_reg: 0.040179
step: 720, time: 0.409, loss: 0.560807, loss_warp: 0.516052, loss_reg: 0.044755
step: 740, time: 0.412, loss: 0.665463, loss_warp: 0.628696, loss_reg: 0.036767
step: 760, time: 0.410, loss: 0.461326, loss_warp: 0.414140, loss_reg: 0.047186
```
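A throwaway sketch (plain Python, using two of the log lines above) to check the loss_warp : loss_reg ratio:

```python
# Quick check of the loss_warp / loss_reg ratio from the log above.
log = """\
step: 600, time: 0.405, loss: 0.404728, loss_warp: 0.363407, loss_reg: 0.041321
step: 700, time: 0.410, loss: 0.339125, loss_warp: 0.298946, loss_reg: 0.040179
"""

ratios = []
for line in log.strip().splitlines():
    fields = dict(kv.split(": ") for kv in line.split(", "))
    ratios.append(float(fields["loss_warp"]) / float(fields["loss_reg"]))

avg = sum(ratios) / len(ratios)
print(avg)  # roughly 8, i.e. close to the 10:1 the poster reports
```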
You can find new segmentation (after neck correction) and pretrained model here. https://drive.google.com/drive/folders/1fol0mMvrgjGE5lZlqR7y-7LhOOraU1wQ?usp=sharing
Thanks, but my question is about the loss scale between loss_warp and loss_gic; mine is about 10:1.
What was it in your experiment setup?
We ran a lot of trainings, so I'm not sure this was the final setup.
Hope this helps.