Enhancement: Integration with LiteLLM
Issue: reduce the need to build and maintain individual inference-provider integrations in continuous-eval.
Advantage: lets users run multiple LLMs with fallbacks/caching without continuous-eval having to build that core infrastructure itself.
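For context, here is a minimal sketch of the kind of unified call LiteLLM provides. Model names are placeholders, and the `fallbacks` kwarg is used as I understand it from litellm's docs, so treat this as illustrative rather than a final design:

```python
# Sketch only: one completion() interface for many providers, with fallbacks.
# Model names are placeholders; any provider litellm supports could be used.
from litellm import completion

messages = [{"role": "user", "content": "Score this answer from 1 to 5: ..."}]

# If the primary model fails, litellm retries against the fallback list.
response = completion(
    model="gpt-4",
    messages=messages,
    fallbacks=["claude-3-haiku-20240307", "gpt-3.5-turbo"],
)
print(response.choices[0].message.content)
```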
That's a good suggestion! Added to our roadmap, but would love if you could contribute!
Hi @pantonante yes, I'd love to take this up!
What's the best way to reach you for a discussion about how to architect this?
That's great, thank you! I will reach out to schedule a meeting.
To implement LiteLLM, I propose we deprecate the llm_factory and build on litellm's completion interface to hit the inference APIs for users directly.
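A rough sketch of what that replacement could look like. The class and method names below are hypothetical, not the existing continuous-eval interface, and the fallback handling is just one way to wire it up:

```python
# Hypothetical sketch of an llm_factory replacement backed by litellm.
# Class/method names are illustrative, not the current continuous-eval API.
from litellm import completion


class LiteLLMClient:
    """Routes metric prompts through litellm so any supported provider works."""

    def __init__(self, model: str, fallbacks: list[str] | None = None):
        self.model = model
        self.fallbacks = fallbacks or []

    def run(self, prompt: str, system_prompt: str = "") -> str:
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": prompt})

        kwargs = {"model": self.model, "messages": messages}
        if self.fallbacks:
            # Only pass fallbacks when the user configured them.
            kwargs["fallbacks"] = self.fallbacks

        response = completion(**kwargs)
        return response.choices[0].message.content


# Example usage (model names are placeholders):
# client = LiteLLMClient("gpt-4", fallbacks=["gpt-3.5-turbo"])
# score = client.run("Score this answer from 1 to 5: ...")
```

Happy to adjust the shape of this once we've discussed how the existing metrics call into llm_factory.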