CoFiPruning

Discrepancy between my evaluation results and README for MNLI in evaluation.py

Open TinaChen95 opened this issue 1 year ago • 4 comments

Hi, I'm running evaluation.py on MNLI as described in the README, but I'm getting different results compared to what's displayed there. I'm using Google Colab for this, and you can find my notebook here: https://colab.research.google.com/drive/1UahAOTIwALfEC_DXE11mVOp5iSgwHoYH?usp=sharing

When I run evaluation.py, it shows the following results:
Task: mnli
Model path: ../CoFi-MNLI-s95
Model size: 4330279
Sparsity: 0.949
Accuracy: 0.091
Seconds/example: 0.000561

However, the README reports different results for the same evaluation:
Task: MNLI
Model path: princeton-nlp/CoFi-MNLI-s95
Model size: 4920106
Sparsity: 0.943
mnli/acc: 0.8055
Seconds/example: 0.010151
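Since the reported model sizes also differ (4330279 vs 4920106), one thing I could check is the raw parameter count of each checkpoint. Below is a rough sketch of that check; it assumes both checkpoints load with the stock transformers class (the repo's own CoFi model classes may be required for the pruned architecture), and I'm not sure whether evaluation.py's "Model size" is a plain parameter count, so the numbers may not match it exactly:

```python
from transformers import AutoModelForSequenceClassification

def count_parameters(path):
    # Load the checkpoint and count all parameters in the loaded model.
    model = AutoModelForSequenceClassification.from_pretrained(path)
    return sum(p.numel() for p in model.parameters())

# Local copy used in my run vs. the checkpoint referenced in the README.
print("local:", count_parameters("../CoFi-MNLI-s95"))
print("hub:  ", count_parameters("princeton-nlp/CoFi-MNLI-s95"))
```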

I need help figuring out why there's a discrepancy between my results and what's described in the README. I've tried to follow the instructions in the README as closely as possible, but I may have missed something. Thank you for any assistance you can provide.
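In case it helps with debugging, this is the kind of quick spot check I plan to try in the notebook, independent of evaluation.py. It is only a sketch: it assumes the checkpoint loads correctly with the stock transformers classes and that the GLUE label order matches the model's config, neither of which I have verified.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "princeton-nlp/CoFi-MNLI-s95"  # or the local ../CoFi-MNLI-s95 copy
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

# Small slice of MNLI validation_matched for a rough accuracy estimate.
data = load_dataset("glue", "mnli", split="validation_matched[:200]")

correct = 0
for example in data:
    inputs = tokenizer(example["premise"], example["hypothesis"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    pred = logits.argmax(dim=-1).item()
    correct += int(pred == example["label"])

print(f"Accuracy on {len(data)} examples: {correct / len(data):.3f}")
```

If the Hub checkpoint scores close to the README number here while my local copy does not, that would at least narrow the problem down to how the local checkpoint is being loaded.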

TinaChen95 · Mar 07 '23