Intermediate evaluation metrics
I'm trying to train TAPIR from scratch with limited GPU resources, following the training configuration specified in the paper (with gradient accumulation, as mentioned here: https://github.com/google-deepmind/tapnet/issues/132#issuecomment-2567534143), but I may not be able to complete the full training run.
Would it be possible to share intermediate evaluation metrics on the TAP-Vid datasets from your run, so we can compare them against ours and see how our run is progressing?
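For anyone else reading this, the gradient-accumulation idea referenced above is simple: split a large batch into micro-batches, sum their mean gradients, and apply one optimizer step with the averaged gradient. This is a minimal NumPy sketch of that equivalence (not code from the tapnet repo; `sgd_full_batch` and `sgd_accumulated` are hypothetical names for illustration):

```python
import numpy as np

def sgd_full_batch(w, grads, lr):
    # One SGD step using the mean gradient of the full batch.
    return w - lr * np.mean(grads, axis=0)

def sgd_accumulated(w, grads, lr, micro_batches):
    # Split the batch into micro-batches, accumulate each chunk's mean
    # gradient, then apply a single update with the overall average.
    # With equal-sized chunks this matches the full-batch step exactly.
    accum = np.zeros_like(w)
    for chunk in np.array_split(grads, micro_batches):
        accum += np.mean(chunk, axis=0)
    return w - lr * accum / micro_batches
```

With equal micro-batch sizes, the two updates are numerically identical, so accumulation trades memory for extra forward/backward passes without changing the optimization trajectory.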
Hi @swarnim-j ,
Thanks for your patience. Here are the intermediate evaluation metrics from training a TAPIR model for 100K steps, under TAPVid-DAVIS strided evaluation; hope this helps.
- Average Jaccard:
- Position Accuracy:
- Occlusion Accuracy:
And here are the intermediate metrics under TAPVid-DAVIS query-first evaluation:
- Average Jaccard:
- Position Accuracy:
- Occlusion Accuracy:
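For reference when reproducing these numbers, here is a minimal NumPy sketch of the three metrics as defined in the TAP-Vid paper (this is illustrative, not the tapnet evaluation code; the `tapvid_metrics` function and its argument layout are assumptions for the example):

```python
import numpy as np

THRESHOLDS = (1, 2, 4, 8, 16)  # pixel thresholds used in the TAP-Vid paper

def tapvid_metrics(gt_xy, gt_occluded, pred_xy, pred_occluded):
    """Sketch of TAP-Vid metrics for one video.

    gt_xy, pred_xy: (num_points, num_frames, 2) pixel coordinates.
    gt_occluded, pred_occluded: (num_points, num_frames) boolean masks.
    """
    gt_visible = ~gt_occluded
    pred_visible = ~pred_occluded

    # Occlusion accuracy: fraction of (point, frame) pairs where the
    # predicted visibility flag matches the ground truth.
    occ_acc = float(np.mean(gt_occluded == pred_occluded))

    dist = np.linalg.norm(pred_xy - gt_xy, axis=-1)

    pts_within, jaccards = [], []
    for thr in THRESHOLDS:
        within = dist <= thr
        # Position accuracy: among ground-truth-visible points, the
        # fraction predicted within the threshold.
        pts_within.append(np.sum(within & gt_visible) / np.sum(gt_visible))
        # Jaccard at this threshold: TP / (TP + FP + FN).
        tp = np.sum(within & gt_visible & pred_visible)
        fp = np.sum(pred_visible & ~(within & gt_visible))
        fn = np.sum(gt_visible & ~(within & pred_visible))
        jaccards.append(tp / (tp + fp + fn))

    return {
        "occlusion_accuracy": occ_acc,
        "position_accuracy": float(np.mean(pts_within)),
        "average_jaccard": float(np.mean(jaccards)),
    }
```

Position accuracy and Average Jaccard are both averaged over the five pixel thresholds; a perfect prediction scores 1.0 on all three metrics.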