CRAFT-Reimplementation

How many epochs do you train when fine-tuning on IC15?

Open kouxichao opened this issue 6 years ago • 26 comments

Some questions: 1. How many epochs do you train when fine-tuning on IC15? 2. At which epoch did the loss value begin to decline significantly?

kouxichao avatar Oct 14 '19 08:10 kouxichao

What is the hmean value on IC15 using the model pretrained on SynthText?

kouxichao avatar Oct 14 '19 08:10 kouxichao

@kouxichao IC15 for 500 epochs, MLT for ~65 epochs. I forgot the IC15 hmean; maybe about 56-57%.

backtime92 avatar Oct 15 '19 01:10 backtime92

Why is the hmean on IC15 still very low, oscillating between 0 and 0.3, after training on IC15 for 200 epochs? Is that normal?

kouxichao avatar Oct 15 '19 06:10 kouxichao

@kouxichao Abnormal; it should be about ~80% after 200 epochs.

backtime92 avatar Oct 15 '19 06:10 backtime92

When you trained on IC15, how did the hmean change? Did it start at around 0.0x or 0.1x? Why does my hmean keep oscillating during training, sometimes 0.1, sometimes 0.2?

kouxichao avatar Oct 16 '19 07:10 kouxichao

@kouxichao Could you share your hmean?

backtime92 avatar Oct 16 '19 07:10 backtime92

This is the loss figure generated during training: [loss_plot_training]

I didn't save the hmean values. Screenshots of the hmean on IC15: [screenshots of evaluation output]

I retrained on IC15 several times for 100-200 epochs, but the hmean is always like the above. I changed the batch size to 1 for SynthText and 5 for IC15, because there is not enough memory. (I tested batch size 2 for SynthText and 5 for IC15 for 100 epochs; the hmean is still like that. For memory reasons, I can only train 100 epochs with the original batch size.)

kouxichao avatar Oct 16 '19 08:10 kouxichao

@kouxichao I will check it and give you the reason tomorrow.

backtime92 avatar Oct 16 '19 08:10 backtime92

The output map and result during training: [res_img_499] [res_img_499_mask]

Thanks, I hope to see your results!

kouxichao avatar Oct 16 '19 08:10 kouxichao

@kouxichao I am sorry, I am very busy today; I will check it at noon tomorrow.

backtime92 avatar Oct 17 '19 09:10 backtime92

Never mind. Can you share your training info (the hmean and loss of every epoch), so that I can find what went wrong earlier in training? Or can you simply describe the trend of the hmean and loss? Was your hmean always greater than the hmean (0.58) evaluated on the pretrained model, or greater than some other value like 0.4?

And this picture is the input to the net; is that normal? [input image]

kouxichao avatar Oct 17 '19 09:10 kouxichao

@kouxichao Have you set the model to eval mode when generating the pseudo labels?

lianqing11 avatar Oct 17 '19 13:10 lianqing11
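For reference, the eval-mode switch being discussed can be sketched as below. This is a minimal illustration, not the repository's actual code: the toy network, `generate_pseudo_labels`, and the tensor shapes are all placeholders.

```python
import torch
import torch.nn as nn

# Toy stand-in for the CRAFT network (hypothetical; the real model differs).
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))

def generate_pseudo_labels(net, image):
    # eval mode makes BatchNorm use its running statistics and disables
    # Dropout, so the predicted score maps (and thus the pseudo labels
    # derived from them) are stable instead of noisy.
    net.eval()
    with torch.no_grad():  # no gradients are needed for label generation
        score_map = net(image)
    return score_map

image = torch.randn(1, 3, 32, 32)
score_map = generate_pseudo_labels(net, image)
print(score_map.shape)   # torch.Size([1, 8, 32, 32])
print(net.training)      # False: call net.train() again before the next update
```

Forgetting this switch means BatchNorm normalizes each batch with its own statistics during label generation, which is one plausible source of the oscillating hmean reported above.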

@lianqing11 You mean this eval mode? [screenshot]

kouxichao avatar Oct 17 '19 23:10 kouxichao

Thanks @lianqing11. @kouxichao, just use the code at line 528 and it will work.

backtime92 avatar Oct 18 '19 01:10 backtime92

[screenshot] You mean uncomment this line? I retrained with it uncommented, but the hmean for epochs 0-17 is still like before. Is that normal?

kouxichao avatar Oct 18 '19 02:10 kouxichao

@kouxichao It is in the data_loader.py file.

backtime92 avatar Oct 18 '19 02:10 backtime92

@kouxichao Have you also set the model back to train mode (model.train()) after generating the pseudo bounding boxes in every iteration? That may solve the problem.

lianqing11 avatar Oct 18 '19 02:10 lianqing11

You mean put net.train() in the for loop? [screenshot]

kouxichao avatar Oct 18 '19 02:10 kouxichao

@kouxichao I am checking it; just a moment.

backtime92 avatar Oct 18 '19 02:10 backtime92

@kouxichao Sorry, there is no idle GPU, so I cannot check it. Could you train on MLT first?

backtime92 avatar Oct 18 '19 02:10 backtime92

Thanks, I fixed it. I put net.train() at the start and net.eval() at the end of the for block. It seems to work.

kouxichao avatar Oct 18 '19 05:10 kouxichao

@kouxichao OK, sorry for the problems.

backtime92 avatar Oct 18 '19 05:10 backtime92

Thanks for your great work and patience with the problem.

kouxichao avatar Oct 18 '19 06:10 kouxichao

Regarding training epochs: the author trained the model with 16 images per GPU. So for weak supervision, if you train the model on real and SynthText images at a 10:2 ratio, you should train 500 epochs for IC15; for MLT you should keep iterations × epochs ≈ 25K.

backtime92 avatar Nov 04 '19 15:11 backtime92
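As a rough check of the schedule above: only the 10:2 real:SynthText ratio, the 500 IC15 epochs, the 25K-iteration target for MLT, and the "~65 epochs" figure from an earlier reply come from this thread; the per-epoch arithmetic below is an assumption-laden sketch.

```python
real_per_batch = 10                      # real images per 12-image batch (10:2)
ic15_train_images = 1000                 # IC15 has 1000 training images
iters_per_epoch = ic15_train_images // real_per_batch
print(iters_per_epoch)                   # 100 iterations per IC15 epoch
print(iters_per_epoch * 500)             # 50000 total iterations for 500 epochs

mlt_total_iters = 25_000                 # target from the comment above
mlt_epochs = 65                          # matches the "~65 epochs" reply
print(mlt_total_iters // mlt_epochs)     # roughly 384 iterations per MLT epoch
```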

Could you please share that part of the code? I mean the net.train() and net.eval() calls. Also, is it possible to use the MLT pretrained weights for training on a custom dataset?

lerndeep avatar Apr 05 '21 04:04 lerndeep

Hi @kouxichao, I am also having a similar issue, with an even worse result at the 143rd epoch:

Loading weights from checkpoint ./data/CRAFT-pytorch/real_weights/CRAFT_clr_143.pth
elapsed time : 86.13623404502869spytorch/test/img_85.jpgg
Calculated!{"precision": 0.0008183306055646482, "recall": 0.00048146364949446316, "hmean": 0.0006062443164595332, "AP": 0}

Could you share which part of the code you modified to fix the problem? Thanks.

yakhyo avatar Aug 23 '21 07:08 yakhyo