lm-evaluation-harness

Fix partial caching of openai models

Open ciaranby opened this issue 1 year ago • 6 comments

  • The cache key is now generated from all kwargs; otherwise requests that differ only in a kwarg produce a cache miss.
  • Also includes a couple of linting changes made by the code formatter.
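As a minimal sketch of the idea (illustrative only; the function name and key layout are assumptions, not the actual lm-evaluation-harness implementation), the cache key can be built by serializing the prompt together with every generation kwarg:

```python
import hashlib
import json

def cache_key(prompt: str, **kwargs) -> str:
    """Build a deterministic cache key from the prompt and ALL kwargs.

    Sketch only: if any kwarg (e.g. temperature or max_tokens) were left
    out of the key, two requests differing only in that kwarg would
    wrongly share a cache entry or always miss.
    """
    # sort_keys makes the key independent of kwarg ordering;
    # default=str handles non-JSON-serializable values.
    payload = json.dumps({"prompt": prompt, **kwargs}, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

With this scheme, `cache_key("hi", temperature=0.0)` and `cache_key("hi", temperature=1.0)` differ, while kwarg order does not matter.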

ciaranby avatar Jun 19 '24 15:06 ciaranby

Can someone advise on the unit test failures? It's unclear whether they are related to my change.

ciaranby avatar Jun 20 '24 16:06 ciaranby

I don't understand what this is about. However, caching is currently not working with openai-completions.

djstrong avatar Jul 24 '24 20:07 djstrong

On the main branch, partial caching of openai models works for loglikelihood requests but not for generate. This PR should fix that.
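For context, partial caching means only uncached requests reach the API. A hypothetical sketch of how a generate-style call would consult the cache first (`cache` and `api_generate` are stand-in names for illustration, not the harness's actual API):

```python
import json

def cached_generate(cache: dict, api_generate, prompt: str, **kwargs):
    """Return a cached result if present; otherwise call the API and store it.

    Sketch only: `cache` is any dict-like store and `api_generate` stands
    in for the real OpenAI completion call.
    """
    # The key must include all kwargs, or generate requests with the
    # same prompt but different settings would collide.
    key = json.dumps({"prompt": prompt, **kwargs}, sort_keys=True, default=str)
    if key in cache:
        return cache[key]          # cache hit: skip the API call entirely
    result = api_generate(prompt, **kwargs)  # cache miss: one real API call
    cache[key] = result
    return result
```

Repeating the same prompt with the same kwargs then triggers exactly one API call; changing any kwarg triggers a fresh one.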

djstrong avatar Aug 06 '24 09:08 djstrong

And I confirm it works with generate.

djstrong avatar Aug 06 '24 11:08 djstrong

Will aim to review + fix merge conflicts / test failures on this ASAP!

haileyschoelkopf avatar Aug 29 '24 14:08 haileyschoelkopf

Hi! After the API refactor in #2008, caching should work properly!

baberabb avatar Aug 29 '24 23:08 baberabb