Semantic_Human_Matting
tnet loss
Hi guys, I've trained the model for a while on about 25k generated samples, composited from 5k foreground images and about 50k randomly chosen background images. After some epochs, the total loss is about 0.02 while the TNET loss is about 0.27. I didn't pre-train TNET, because when I do and then change the train type to 'end-to-end', the TNET loss increases dramatically. So my questions are:
- How does the TNET loss affect the total loss? If the TNET loss decreases to 0.01, does the total loss decrease to, say, 0.001? I know the total loss is the sum of the alpha loss and 0.01 * TNET loss, but TNET's output is the input of MNET, so I think a better TNET should improve the final result.
- How can I force the TNET loss to stay small after changing the train type to 'end-to-end'?
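To make the weighting concrete, here is a minimal sketch of how I understand the loss combination; the L1 alpha loss and 3-class cross-entropy trimap loss are assumptions on my part, so the repo's exact loss terms may differ:

```python
import torch
import torch.nn.functional as F

def shm_loss(alpha_pred, alpha_gt, trimap_logits, trimap_gt, t_weight=0.01):
    # prediction (alpha) loss L_p -- assuming L1 here
    loss_p = F.l1_loss(alpha_pred, alpha_gt)
    # trimap loss L_t -- assuming 3-class cross-entropy (bg / unsure / fg)
    loss_t = F.cross_entropy(trimap_logits, trimap_gt)
    # total loss: L_t only enters directly with weight 0.01, so even a
    # large L_t moves the total by just 0.01 * L_t; its main effect on
    # the final alpha is indirect, via the trimap fed into MNET
    return loss_p + t_weight * loss_t, loss_p, loss_t
```

With this weighting, a TNET loss of 0.27 contributes only ~0.0027 to the total directly, which matches the logs below.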
A few lines of my training log are as follows, but the results are not satisfactory:
```
[403 / 2000] Lr: 0.00001
loss: 0.02102 loss_p: 0.03657 loss_t: 0.27316
[404 / 2000] Lr: 0.00001
loss: 0.02124 loss_p: 0.03699 loss_t: 0.27437
[405 / 2000] Lr: 0.00001
loss: 0.02090 loss_p: 0.03636 loss_t: 0.27218
[406 / 2000] Lr: 0.00001
loss: 0.02079 loss_p: 0.03614 loss_t: 0.27150
[407 / 2000] Lr: 0.00001
loss: 0.02054 loss_p: 0.03571 loss_t: 0.26916
```
@lizhengwei1992 and @tsing90 can you help me please?
Try modifying this line and see whether your problem is solved: https://github.com/lizhengwei1992/Semantic_Human_Matting/blob/master/model/network.py#L39 `bg, unsure, fg = torch.split(trimap_softmax, 1, dim=1)`
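For reference, here is a standalone toy example of what that split does (random tensor, not the repo's actual network output):

```python
import torch

# toy trimap probabilities: batch 1, 3 channels (bg / unsure / fg), 4x4
trimap_softmax = torch.softmax(torch.randn(1, 3, 4, 4), dim=1)

# split into three single-channel maps along the channel dimension;
# each piece keeps its channel axis, so each has shape (1, 1, 4, 4)
bg, unsure, fg = torch.split(trimap_softmax, 1, dim=1)
```

Since the three maps come from a softmax, they sum to 1 at every pixel; the `unsure` channel is what gates MNET's refinement.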
@tsing90 I see you have this code running.
I don't know how to generate the train.txt file. How can I create it from my images and masks?
Could you help me?
Just write their paths (images & masks) into that file. BTW, I have just released similar code; please refer to my repo.
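Something like this should work; note the exact line format the repo's data loader expects is an assumption here (one image path and one mask path per line), so check its dataset code and adjust the `f.write()` line to match:

```python
import os

def write_train_list(image_dir, mask_dir, out_path="train.txt"):
    # Pair images and masks by shared filename and write one
    # "<image_path> <mask_path>" line per pair; skip images
    # that have no matching mask.
    with open(out_path, "w") as f:
        for name in sorted(os.listdir(image_dir)):
            mask_path = os.path.join(mask_dir, name)
            if os.path.exists(mask_path):
                f.write(f"{os.path.join(image_dir, name)} {mask_path}\n")
```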
@tsing90 nice work, I'll follow you.