
Hello, why 40 * loss_gic

shufangxun opened this issue 4 years ago · 5 comments

Lgic = gicloss(grid)
# 200x200 = 40.000 * 0.001
Lgic = Lgic / (grid.shape[0] * grid.shape[1] * grid.shape[2])
loss = Lwarp + 40 * Lgic    # total GMM loss

Why is loss_gic multiplied by 40, and what does the comment 200x200 = 40.000 * 0.001 mean?

shufangxun avatar Jul 13 '20 14:07 shufangxun
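For reference, here is a minimal arithmetic reading of that code comment (an editor's sketch, not confirmed by the authors in this thread): it assumes the TPS warp grid is 200 x 200, so each sample contributes 40,000 grid points to Lgic before normalization.

grid_points = 200 * 200      # 40,000 points per sample in the warp grid (assumed size)
lambda_gic = 40              # weight applied to Lgic in the total GMM loss
# Normalizing Lgic by the number of grid points and then multiplying by 40
# gives an effective per-point weight of 40 / 40,000 = 0.001, which seems to
# be what the "200x200 = 40.000 * 0.001" comment records.
print(lambda_gic / grid_points)   # 0.001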

@shufangxun I think your question is about the number 40. It is a hyperparameter that we chose after several training runs. At first we tried loss = Lwarp + lambda * Lgic with lambda = 1 and monitored both Lwarp and Lgic during training. Referring to previous works, we chose lambda = 40 so that Lgic contributes noticeably to the final loss.

If lambda is too big, the cloth cannot be deformed much and the warp stays close to an affine transform.

In addition, for the calculation of this regularization term Lgic we also tried variants based on L2 and L1 (we use L1 in the paper).

thaithanhtuan avatar Jul 13 '20 15:07 thaithanhtuan
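For readers following along, here is a minimal sketch of how the two terms are combined in GMM training, assuming an L1 warp loss between the warped cloth and the cloth region on the person, and a GicLoss-style module for the regularizer; the function and argument names are illustrative assumptions, not the exact training code.

import torch.nn as nn

criterionL1 = nn.L1Loss()

def gmm_loss(warped_cloth, im_c, grid, gicloss, lambda_gic=40):
    # L1 warping loss between the warped cloth and the target cloth-on-person image.
    Lwarp = criterionL1(warped_cloth, im_c)
    # Grid-interval-consistency regularizer, normalized by the number of
    # grid points (batch * height * width), then weighted by lambda_gic = 40.
    Lgic = gicloss(grid)
    Lgic = Lgic / (grid.shape[0] * grid.shape[1] * grid.shape[2])
    return Lwarp + lambda_gic * Lgic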

Thanks. Besides, what is the relative scale between loss_warp and loss_gic? Here is mine, about 10 : 1:

step:      600, time: 0.405, loss: 0.404728, loss_warp: 0.363407, loss_reg: 0.041321
step:      620, time: 0.409, loss: 0.389795, loss_warp: 0.350896, loss_reg: 0.038900
step:      640, time: 0.416, loss: 0.457816, loss_warp: 0.418512, loss_reg: 0.039304
step:      660, time: 0.407, loss: 0.675993, loss_warp: 0.621634, loss_reg: 0.054359
step:      680, time: 0.403, loss: 0.610995, loss_warp: 0.567957, loss_reg: 0.043037
step:      700, time: 0.410, loss: 0.339125, loss_warp: 0.298946, loss_reg: 0.040179
step:      720, time: 0.409, loss: 0.560807, loss_warp: 0.516052, loss_reg: 0.044755
step:      740, time: 0.412, loss: 0.665463, loss_warp: 0.628696, loss_reg: 0.036767
step:      760, time: 0.410, loss: 0.461326, loss_warp: 0.414140, loss_reg: 0.047186

shufangxun avatar Jul 14 '20 06:07 shufangxun

You can find the new segmentation (after neck correction) and the pretrained model here: https://drive.google.com/drive/folders/1fol0mMvrgjGE5lZlqR7y-7LhOOraU1wQ?usp=sharing

thaithanhtuan avatar Jul 14 '20 07:07 thaithanhtuan

Thanks, but my question is about the loss scale between loss_warp and loss_gic; mine is about 10 : 1.

What was it in your experimental setup?

shufangxun avatar Jul 14 '20 07:07 shufangxun

We ran a lot of training sessions, so I'm not sure this one is the final setup. [attached image] Hope this helps.

thaithanhtuan avatar Jul 15 '20 10:07 thaithanhtuan