Fix partial caching of OpenAI models
- The caching key is now generated from all kwargs; otherwise, requests that differ only in a kwarg produce a cache miss.
- Also includes a couple of linting changes made by the code formatter.
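The idea behind the fix can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: `cache_key` is an assumed helper name, and the hashing scheme is one plausible way to derive a stable key from the full set of request kwargs.

```python
import hashlib
import json

def cache_key(model: str, prompt: str, **kwargs) -> str:
    """Build a deterministic cache key from the full request.

    Including *all* kwargs (serialized with sorted keys for
    stability) means identical requests always hit the cache,
    while requests differing in any kwarg (e.g. temperature)
    get distinct keys instead of colliding.
    """
    payload = json.dumps(
        {"model": model, "prompt": prompt, **kwargs},
        sort_keys=True,
        default=str,  # tolerate non-JSON-serializable values
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Same request, different kwarg order -> same key.
k1 = cache_key("davinci", "hello", temperature=0.0, max_tokens=16)
k2 = cache_key("davinci", "hello", max_tokens=16, temperature=0.0)
# Any changed kwarg -> different key.
k3 = cache_key("davinci", "hello", temperature=0.7, max_tokens=16)
```

If a key were built from only the prompt (ignoring kwargs), `k1` and `k3` would collide and one would wrongly be served from cache.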
Can someone advise on the unit test failures? It's unclear to me whether they are related to my change.
However, now caching is not working with openai-completions.
On the main branch, partial caching of OpenAI models works for loglikelihood but not for generate. This PR should fix that, and I can confirm it now works with generate.
Will aim to review + fix merge conflicts / test failures on this ASAP!
Hi! After the API refactor in #2008, caching should work properly!