3D-ResNets-PyTorch
How to use eval_ucf101.py
Hi, I trained my model and now I want to evaluate its performance on UCF101, so I thought of using the "eval_ucf101.py" file, but I couldn't understand how to use the "UCFclassification" class: what are its input arguments? Can you give me a hand with this? By the way, what is the difference between evaluating with this file and evaluating by just passing "test" in the arguments?
Thanks!
Hi, may I know if you have figured it out? I am facing the same problem too.
Also, I am wondering what the purpose of this eval_ucf101.py file is. Isn't the accuracy already stated in the .log files?
Thanks.
I also have this question because I'd like to see if I can reproduce the results reported in the paper. It appears that UCFclassification expects a json file for both the ground truth data and the model predictions.
I think we can use one of the generated annotation files (e.g. "ucf101_01.json") as the ground truth file, because UCFclassification._import_ground_truth seems to use the same keys. As for the json file of model predictions, I haven't found a script that does that part (running inference and exporting the predictions to a json file) so far.
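From reading the class, I'd expect the call to look roughly like the sketch below. This is only my reading of the code, not a documented API: the subset and top_k argument names and the hit_at_k attribute are what I found in my checkout and may differ in yours.

```python
# Minimal sketch of using UCFclassification; assumes eval_ucf101.py is
# importable from the working directory and that the constructor takes
# the two json filenames plus subset/top_k (my reading, not documented).
from eval_ucf101 import UCFclassification

ucf_classification = UCFclassification(
    'ucf101_01.json',      # ground truth: a generated annotation file
    'results/val.json',    # model predictions exported as json
    subset='validation',   # which split of the annotation file to score
    top_k=1)               # top-1 video-level accuracy

ucf_classification.evaluate()
print(ucf_classification.hit_at_k)  # accuracy after evaluate() has run
```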
I was able to run eval_ucf101.py after specifying results/val.json as the model prediction file. You can generate this file by enabling the test option in main.py:
https://github.com/kenshohara/3D-ResNets-PyTorch/blob/94dd85a7a249e4e909864a1a7e201b848867d3c2/opts.py#L160
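For reference, the run that produced results/val.json for me looked something like the command below. The paths, checkpoint name, and model depth are placeholders for whatever you trained with, so adjust them to your setup:

```bash
python main.py --root_path ~/data --video_path ucf101_videos/jpg \
    --annotation_path ucf101_01.json --result_path results \
    --dataset ucf101 --n_classes 101 --model resnet --model_depth 34 \
    --resume_path results/save_200.pth \
    --no_train --no_val --test
```

If I read test.py correctly, the resulting val.json stores, under a top-level "results" key, a ranked list of {"label": ..., "score": ...} entries per video id, which seems to be what UCFclassification._import_prediction expects.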
I also needed to fix a tensor error by referring to issue #63, and modify some parts of eval_ucf101.py to run under Python 3.
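I don't have the exact diff anymore, but the changes were along these lines (what issue #63 covers is from memory, so double-check against the issue itself):

```python
import torch

# The tensor error: on PyTorch >= 0.5, indexing a 0-dim (scalar) tensor
# with loss.data[0] raises "invalid index of a 0-dim tensor"; the fix is
# to read the scalar with .item() instead.
loss = torch.nn.functional.mse_loss(torch.zeros(3), torch.ones(3))
value = loss.item()  # instead of loss.data[0]
print(value)

# Typical Python 2 leftovers to fix for Python 3:
# - print statements   -> print() function calls
# - dict.iteritems()   -> dict.items()
predictions = {'v_ApplyEyeMakeup_g01_c01': 0.9}  # hypothetical example data
for video_id, score in predictions.items():  # was .iteritems() in Python 2
    print(video_id, score)
```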
@kenshohara and @pentiumx - What is the difference between the accuracy obtained using the eval scripts in util_scripts and the accuracies in the log files?