Added option for specifying model config directly, for programmatic use
Removed in huggingface#187
The parameter was added to allow passing the config programmatically, without first having to save it to a file.
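For readers of this thread, a minimal sketch of the programmatic usage being proposed. The `TGIModelConfig` name, import path, and field names below are assumptions for illustration, not necessarily the exact API this PR adds:

```python
# Hypothetical sketch: the import path and field names are assumptions,
# not the exact API introduced in this PR.
from lighteval.models.model_config import TGIModelConfig  # assumed import path

# Build the config as a plain Python object instead of writing it out to a
# config file on disk and passing the file path around.
config = TGIModelConfig(
    inference_server_address="http://localhost:8080",  # assumed field name
    inference_server_auth=None,                        # assumed field name
)
# The config object can then be handed to the evaluation entry point directly.
```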
Hi! Thanks for the PR - I think something like this will be interesting to add once #236 is merged
Hi! Have you taken a look at the new system, merged in https://github.com/huggingface/lighteval/pull/269 ? Are there still things you want to add now?
Hi! The new system looks great! I haven't had a chance to delve into it yet, but I believe some of the changes I added here are still relevant, although they will probably need to be updated to match the current main branch.
It seems I didn't list the features in the main PR message; the features I added here that are still relevant are:
1] Support for OpenAI-compatible servers - for example, if we launch a model using vLLM, we can just specify a different config and it will use the OpenAI interface instead of the TGI interface (see the first sketch below).
2] Retry with backoff - if a single request fails (e.g., times out), we retry a few times instead of stopping the whole run (see the second sketch below).
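To illustrate point 1], here is a minimal sketch of querying a vLLM server through its OpenAI-compatible endpoint, using the standard `openai` client rather than any code from this PR. The base URL and model name are placeholders:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint, e.g. when started with:
#   python -m vllm.entrypoints.openai.api_server --model <model-name>
# The base_url and model name below are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="meta-llama/Llama-2-7b-hf",  # placeholder model name
    prompt="The capital of France is",
    max_tokens=16,
    temperature=0.0,
)
print(response.choices[0].text)
```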
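And for point 2], a generic retry-with-backoff sketch of the kind of wrapper meant here. The retry count, delays, and blanket `Exception` catch are illustrative defaults, not the values used in this PR:

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Call `fn`, retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # in practice, catch only transient errors (timeouts, 5xx)
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            delay = min(base_delay * 2 ** attempt, max_delay)
            delay += random.uniform(0, delay / 2)  # jitter to avoid retry bursts
            time.sleep(delay)

# Usage: wrap a single request so one timeout doesn't kill the whole run, e.g.
# result = with_backoff(lambda: client.completions.create(...))
```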
If you think this is important, I'm hoping to have some time next week or the week after to update the PR to work with the latest main branch.
Hm, I think OpenAI compatibility was added for the LLM-as-a-judge, so some code could be taken from that if needed! Yep, sounds good!
Thanks a lot for your contrib! :)
Hi @shaltielshmid! Going to close this one, as I think it will be easier to start from scratch given the codebase changes - but feel free to reopen if you disagree.