lm-evaluation-harness
cache not storing predictions
The `--use_cache` argument only seems to be caching the model requests, not the predictions (contrary to what is indicated in the README). Am I missing something here, or is this not currently implemented? I am running into the problem of hitting the time limit on a run, at which point all predictions are lost; if they could be cached, I would be able to resume the run where I left off.
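For reference, this is roughly how I am invoking the harness (model name, task, and cache path below are just placeholders for my actual setup):

```shell
# Sketch of the invocation; "EleutherAI/pythia-160m", "lambada_openai",
# and the cache path are illustrative placeholders, not my exact config.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks lambada_openai \
    --use_cache /path/to/cache_db \
    --output_path results/
```

My expectation from the README was that, on a re-run with the same `--use_cache` path, previously completed requests would be loaded from the cache instead of being recomputed.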
Thank you for your help!