Visual-Template-Free-Form-Parsing
Issue regarding the test images
I was trying to evaluate the detections on test images. But when I passed around 50 images in a folder, I was able to print/detect only 10 images; the remaining images were not getting detected.
I'm not sure I understand what's going on. It looks like the model was run on all 50 images (it wrote 50 different JSONs). What do you mean by "I was able to detect only 10 images"?
What is the specific command you're using with eval.py? And what is your desired output? Are the JSONs what you want, or is it something else?
I am passing the command below to print the test images along with their JSON output; the result I got is 50 JSONs but only 10 images.
!python eval.py -c saved/detector/checkpoint-iteration150000.pth -g 0 -n 10 -d /content/result -a save_json=/content/result,pretty=true -T
Are you seeing all 50 JSONs, but only 10 images? eval.py has two loops for running saveFunc: one passes the destination directory, the other doesn't (and thus doesn't save the image). For some reason it might be switching to the non-saving loop early. Either go into the code and change this manually, or try setting -n 99999999 (arbitrarily big).
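In pseudocode, the behavior looks roughly like this (saveFunc's real signature and the loop structure in eval.py differ; this is just a runnable sketch of the symptom):

```python
# Hypothetical sketch of the two save loops described above; eval.py's actual
# saveFunc takes more arguments, this only illustrates the behavior.
def saveFunc(instance, out_dir=None):
    if out_dir is not None:
        print(f"saving image for {instance} to {out_dir}")  # image written
    print(f"writing json for {instance}")                   # JSON always written

instances = [f"img{i}" for i in range(50)]
n = 10  # the -n flag

for instance in instances[:n]:
    saveFunc(instance, out_dir="/content/result")  # first loop: images saved
for instance in instances[n:]:
    saveFunc(instance)                             # second loop: JSONs only
```

If the image-saving loop only covers the first -n instances, you get exactly the 50 JSONs / 10 images split you're seeing.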
You could also remove the ,end='\r' from eval.py to get more helpful prints.
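That change just drops the carriage-return line terminator so each progress line stays visible instead of being overwritten (the message text below is made up for illustration, not the actual print in eval.py):

```python
name = "img0.png"
print(f"processed {name}", end='\r')  # '\r' makes the next print overwrite this line
print(f"processed {name}")            # without it, every instance leaves its own line
```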
I coded up some things badly in eval.py, which causes weird behaviors when the normal and validation batch sizes are different. I've updated the code so it now goes off of the validation batch size alone instead of the training set's. Pull and try it.
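The effect of the fix is roughly this (a sketch with hypothetical names, not the repo's actual code):

```python
import math

def num_eval_batches(num_images: int, val_batch_size: int) -> int:
    # Loop length now derives from the validation batch size alone,
    # so a different training batch size can no longer cut evaluation short.
    return math.ceil(num_images / val_batch_size)

print(num_eval_batches(50, 4))  # -> 13 batches covering all 50 images
```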
Yes, it's working accurately now, but I'd like to understand the evaluation of the results as well. In my case the results come out like this; it would be great if you could explain a bit more about the evaluation:
ap_5 overall mean: 0.4649767390956719, std 0.0
recall overall mean: [1. 0.72727273], std [0. 0.]
prec overall mean: [1. 0.66666667], std [0. 0.]
nn_loss overall mean: 0.016735432669520378, std 0.0
class0_ap overall mean: 0.5, std 0.0
class1_ap overall mean: 0.37713032581453626, std 0.0
ap_5 overall mean is the mean average precision (mAP). class0_ap and class1_ap are the mAP for the individual classes. The recall overall and prec overall are the recall and precision (averaged per document) for the two classes you're detecting separately (hence there are two numbers).
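If it helps to see the averaging, here's a small sketch (variable names are mine, not the repo's; the real computation lives in the evaluation code):

```python
import numpy as np

# One row per document, one column per detected class. With a single
# document, these reproduce the log output above.
recall_per_doc = np.array([[1.0, 0.72727273]])
prec_per_doc = np.array([[1.0, 0.66666667]])

# "overall mean" averages over documents while keeping the classes separate,
# which is why recall and prec are each reported as two numbers.
print("recall overall mean:", recall_per_doc.mean(axis=0))  # [1. 0.72727273]
print("prec overall mean:", prec_per_doc.mean(axis=0))      # [1. 0.66666667]
```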