CRAFT-Reimplementation
How many epochs do you train when fine-tuning on IC15?
Some questions: 1. How many epochs do you train when fine-tuning on IC15? 2. At which epoch did the loss begin to decline significantly?
What is the hmean on IC15 using the model pretrained on SynthText?
@kouxichao IC15 for 500 epochs, MLT for ~65 epochs. I forget the IC15 hmean; maybe about 56-57%.
Why is the hmean on IC15 still very low, oscillating between 0 and 0.3, after training on IC15 for 200 epochs? Is that normal?
@kouxichao Abnormal; it should be about ~80% at 200 epochs.
When you trained on IC15, how did the hmean change? Was it around 0.0x or 0.1x at the beginning? Why does my hmean keep oscillating during training, sometimes 0.1 and sometimes 0.2?
@kouxichao Could you share your hmean?
This is the loss figure generated during training:

I didn't save the hmean values. Screenshots of the hmean on IC15:

I retrained on IC15 several times for 100-200 epochs, but the hmean always looks like the above. I changed the batch size to 1 for SynthText and 5 for IC15 because there is not enough GPU memory. (I also tested batch size 2 for SynthText and 5 for IC15 for 100 epochs, and the hmean is still like that. For memory reasons, I can only train 100 epochs with the original batch sizes.)
@kouxichao I will check it and give you the reason tomorrow.
The output maps and results during training:
Thanks, I hope to see the results of your check!
@kouxichao I am so sorry, I am very busy today. I will check it at noon tomorrow.
Never mind. Can you share your training info (the hmean and loss of every epoch), so that I can find whether something went wrong much earlier in training? Or can you simply describe the trend of the hmean and loss? Is the hmean always greater than the hmean (0.58) evaluated with the pretrained model, or greater than some other value like 0.4?
And this picture is the input to the net. Is that normal?

@kouxichao Have you set the model to eval mode in the code that generates the pseudo labels?
@lianqing11 You mean this eval mode?

Thanks, @lianqing11. @kouxichao, just use the code at line 528 and it will work.
You mean uncomment this line? I retrained with it uncommented, and the hmean for epochs 0-17 is still like before. Is that normal?
@kouxichao In the data_loader.py file.
@kouxichao Have you also set the model back to train mode (model.train()) after generating the pseudo bounding boxes in every iteration? That may solve the problem.
You mean put net.train() in the for loop?

@kouxichao I am checking it; just a moment.
@kouxichao Sorry, there is no idle GPU, so I cannot check it. Could you train on MLT first?
Thanks, I fixed it. I put net.train() at the start and net.eval() at the end of the 'for' code block. It seems to work.
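For anyone hitting the same oscillating hmean, here is a minimal sketch of the mode switching discussed above. It is not the repository's actual code: `net`, `generate_pseudo_labels`, and `criterion` are placeholder names, and only the placement of the net.eval()/net.train() calls is the point.

```python
import torch

def train_one_epoch(net, real_loader, optimizer, criterion,
                    generate_pseudo_labels, device):
    # generate_pseudo_labels is a placeholder (hypothetical signature) for
    # the routine that runs the current net on real images and builds the
    # region/affinity score maps from the word-level annotations.
    for images, word_boxes in real_loader:
        images = images.to(device)

        # Pseudo labels must be generated in eval mode so that BatchNorm
        # uses its running statistics and Dropout is disabled; otherwise
        # the score maps (and thus the labels) are noisy and the hmean
        # oscillates.
        net.eval()
        with torch.no_grad():
            region_gt, affinity_gt = generate_pseudo_labels(
                net, images, word_boxes)

        # Switch back to train mode before the forward/backward pass,
        # otherwise the BatchNorm statistics are never updated.
        net.train()
        region_pred, affinity_pred = net(images)
        loss = criterion(region_pred, affinity_pred, region_gt, affinity_gt)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```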
@kouxichao OK, sorry for the problems.
Thanks for your great work and patience with this problem.
Regarding training epochs: the author trained the model with 16 images per GPU, so for weak supervision, if you train the model on real and synthetic images at a 10:2 ratio, you should train 500 epochs for IC15; for MLT you should keep iterations * epochs = 25K, as in the sketch below.
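To make the 25K rule concrete, here is a back-of-the-envelope calculation. The dataset size and batch size below are hypothetical examples, not the repository's settings; only the iterations * epochs = 25K rule comes from the comment above.

```python
num_real_images = 1000        # example; use your own dataset's size
real_batch_size = 10          # real:synthetic images mixed 10:2 per batch
iters_per_epoch = num_real_images // real_batch_size  # 100 iterations/epoch
target_iters = 25_000                                 # iterations * epochs = 25K
epochs = target_iters // iters_per_epoch              # -> 250 epochs
print(f"train for ~{epochs} epochs")
```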
Could you please share that part of the code, i.e. where net.train() and net.eval() are called? Also, is it possible to use the MLT pretrained weights for training on a custom dataset?
Hi @kouxichao, I am also having a similar issue, but with an even worse result at the 143rd epoch:
Loading weights from checkpoint ./data/CRAFT-pytorch/real_weights/CRAFT_clr_143.pth
elapsed time : 86.13623404502869s
Calculated!{"precision": 0.0008183306055646482, "recall": 0.00048146364949446316, "hmean": 0.0006062443164595332, "AP": 0}
Could you share which part of the code you modified to fix the problem? Thanks.