Make LiteLLM the default LLM API
Hi @krrishdholakia, my comment on parity was incorrect. I'm not sure what I meant, but I may have meant parity within our own codebase: we didn't support streaming for LiteLLM.
I've also changed the next steps here.
For 0.5.0, I want to make LiteLLM the DEFAULT LLM handler that we provide first-class support for. We will change all runbooks etc. to use LiteLLM. We will make the llm_api parameter in guard calls optional, and pass all args provided in that call directly through to an internal LiteLLM chat client.
Originally posted by @zsimjee in https://github.com/guardrails-ai/guardrails/discussions/680#discussioncomment-9348176
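For illustration, here is a minimal sketch of what that pass-through call style could look like once llm_api is optional. This is an assumption about the eventual interface, not the shipped API: the `model` and `messages` kwargs are just the OpenAI-style chat parameters that LiteLLM understands, forwarded untouched to the internal client.

```python
# Hedged sketch only: assumes guard(...) forwards unrecognized kwargs to LiteLLM.
from guardrails import Guard

guard = Guard()  # validators omitted for brevity

# No llm_api passed; the `model` kwarg tells the internal LiteLLM client which
# provider/model to call, and everything else is passed through as-is.
result = guard(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    temperature=0.0,
)
print(result.validated_output)
```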
- [x] Use the OpenAI interface for our main callable, Guard.call. We do not need to do this explicitly; instead, we can take all args and kwargs and pass them through to the LiteLLM SDK.
- [ ] Use the same interface within validators that use LLMs.
- [ ] Support batch and async LiteLLM workflows.
- [x] Make the llm_callable param in Guard.call optional. When it is not provided but an arg is passed that LiteLLM uses to determine the model (the `model` arg), automatically create and use a LiteLLM client. For async, use acreate and attach to the event loop if it exists by then (a rough sketch of this dispatch rule follows the list).
- [ ] Make changes in the Guardrails API that let users pass the same params over the wire and automatically use a generated LiteLLM client to make LLM requests on the server.
- [x] Make sure custom callables still work.
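As referenced in the llm_callable item above, the fallback rule could look roughly like the following: if no callable is provided but a `model` kwarg is present, hand the call to LiteLLM, preferring the async entry point when an event loop is already running. `resolve_llm_callable` is a hypothetical helper name used only for illustration, and `litellm.completion` / `litellm.acompletion` are the actual LiteLLM entry points (the discussion refers to "acreate"). Custom callables short-circuit the rule, so they keep working unchanged.

```python
# Illustrative sketch, not library code: resolve_llm_callable is a hypothetical
# helper showing the dispatch rule described in the checklist above.
import asyncio

import litellm


def resolve_llm_callable(llm_callable=None, **llm_kwargs):
    """Return the function a guard call should use to reach the LLM."""
    if llm_callable is not None:
        # Custom callables keep working exactly as before.
        return llm_callable
    if "model" in llm_kwargs:
        try:
            # An event loop is already running: prefer LiteLLM's async client.
            asyncio.get_running_loop()
            return litellm.acompletion
        except RuntimeError:
            return litellm.completion
    raise ValueError("Pass either llm_callable or a `model` kwarg for LiteLLM.")


# Outside an event loop this resolves to litellm.completion;
# inside `async def` code it would resolve to litellm.acompletion.
fn = resolve_llm_callable(model="gpt-4o-mini")
```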