Using harness with a local internal API

Open asimokby opened this issue 1 year ago • 2 comments

Hello there,

Is there a way to run the evals with a local internal API URL of a model? Do I have to create a class under lm_eval/models and implement some specific models (if so what are they)? Any idea where to start?

Thank you.

asimokby avatar Dec 20 '23 06:12 asimokby

Hi! Assuming that your local API can be called via the same interface as OpenAI (i.e. by setting `openai.OpenAI(base_url=base_url)`), this will be addressed and documented in #1174!
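For an OpenAI-compatible local server, a recent harness version can be pointed at it from the CLI. A hedged sketch follows; the model alias `local-chat-completions`, the model name, and the URL are placeholders, so check the harness docs for the exact names supported by your version:

```shell
# Evaluate a model served behind a local OpenAI-compatible endpoint.
# The alias, model name, and base_url below are illustrative placeholders.
pip install "lm-eval[api]"

lm_eval --model local-chat-completions \
    --model_args model=my-local-model,base_url=http://localhost:8000/v1/chat/completions \
    --tasks hellaswag \
    --batch_size 1
```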

haileyschoelkopf avatar Dec 20 '23 13:12 haileyschoelkopf

ChatCompletions support for arbitrary API providers is now merged. The same changes to OpenAIChatCompletionsLM will need to be ported to OpenAICompletionsLM to support non-chat models in this way; this is planned to be added ASAP, but if you'd like to take it on in the meantime, we'd welcome a PR as well!

haileyschoelkopf avatar Dec 20 '23 20:12 haileyschoelkopf

Also, if your internal API is different from the OpenAI one, you can still implement custom support! You can use the existing OpenAI model implementation as a guide... basically you need to implement the three request types (`loglikelihood`, `loglikelihood_rolling`, and `generate_until`).
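To make the three request types concrete, here is a hedged Python sketch of what such a custom model class might look like. The class name `LocalAPILM`, the `/loglikelihood` and `/generate` routes, and the response schema are all assumptions for illustration; the real base class and interface live in `lm_eval/api/model.py`:

```python
# Sketch of a custom model class for a local HTTP API.
# The routes and response fields here are hypothetical, not real harness code.
import json
from urllib import request as urlrequest


class LocalAPILM:
    """Implements the three request types the harness dispatches to a model:
    loglikelihood, loglikelihood_rolling, and generate_until."""

    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url

    def _post(self, route, payload):
        # POST a JSON payload to the local API and decode the JSON reply.
        req = urlrequest.Request(
            self.base_url + route,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urlrequest.urlopen(req) as resp:
            return json.loads(resp.read())

    @staticmethod
    def _score(token_logprobs, token_is_greedy):
        # Reduce per-token logprobs into the (total_logprob, is_greedy)
        # tuple the harness expects for each (context, continuation) pair.
        return sum(token_logprobs), all(token_is_greedy)

    def loglikelihood(self, requests):
        # requests: iterable of (context, continuation) pairs.
        results = []
        for context, continuation in requests:
            resp = self._post(
                "/loglikelihood",
                {"context": context, "continuation": continuation},
            )
            results.append(self._score(resp["logprobs"], resp["greedy"]))
        return results

    def loglikelihood_rolling(self, requests):
        # Score each full string with an empty context, token by token.
        return [self.loglikelihood([("", text)])[0][0] for (text,) in requests]

    def generate_until(self, requests):
        # Generate text until one of the requested stop sequences appears.
        return [
            self._post(
                "/generate",
                {"prompt": ctx, "stop": args.get("until", [])},
            )["text"]
            for ctx, args in requests
        ]
```

The pure `_score` helper is separated out so the scoring logic can be exercised without a live server; everything that touches the network stays in `_post`.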

StellaAthena avatar Jan 08 '24 14:01 StellaAthena

+1, and see https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md for a walkthrough on adding custom support.

haileyschoelkopf avatar Jan 08 '24 14:01 haileyschoelkopf