LiteLLM SDK LLM Service

Open danthegoodman1 opened this issue 9 months ago • 8 comments

https://github.com/BerriAI/litellm

Rather than having a ton of individual services with different formats, using the LiteLLM Python SDK means changing models is literally as simple as changing a single line, without downstream issues (e.g. different formats for functions).
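For illustration, a minimal sketch of the one-line switch with the LiteLLM SDK (`litellm.completion` follows the OpenAI chat-completions shape; the model strings below are placeholders, not a recommendation):

```python
def ask(model: str, prompt: str) -> str:
    """One-turn chat completion via LiteLLM; only `model` varies per provider."""
    import litellm  # pip install litellm

    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # LiteLLM normalizes every provider's response to the OpenAI shape.
    return response.choices[0].message.content

# Switching providers is a one-line change at the call site, e.g.:
#   ask("gpt-4o", "Hello")
#   ask("anthropic/claude-3-5-sonnet-20240620", "Hello")
```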

danthegoodman1 avatar Mar 10 '25 13:03 danthegoodman1

This is an interesting idea. I don't think we want to replace all of the existing LLM services right away, as the devil is often in the details with abstraction layers like this. But we could definitely build a LiteLLMService that sits alongside existing services, and use that as a proving ground of sorts.

chadbailey59 avatar Mar 10 '25 19:03 chadbailey59

Yes definitely, I think just having it as an option solves it for us.

The problem isn't the different LLM processors, it's all the different formats they expect.

danthegoodman1 avatar Mar 10 '25 19:03 danthegoodman1

Yes, this will be extremely useful for us. We are deploying newer open-source models with vLLM, which LiteLLM exposes as OpenAI-compatible APIs. Having LiteLLM integration (in Pipecat and Pipecat Flows) will allow us to switch models and test easily.
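Since both the LiteLLM proxy and vLLM speak the OpenAI wire format, one interim option is to point Pipecat's existing OpenAI service at such an endpoint. A sketch, assuming Pipecat's `OpenAILLMService` accepts a `base_url` (the import path and constructor arguments may differ across Pipecat versions; the URL and model name are placeholders):

```python
BASE_URL = "http://localhost:4000/v1"  # assumed local LiteLLM proxy / vLLM endpoint

try:
    # Import path varies across Pipecat versions; adjust as needed.
    from pipecat.services.openai.llm import OpenAILLMService

    llm = OpenAILLMService(
        model="my-vllm-model",  # placeholder: model name registered with the proxy
        api_key="sk-anything",  # local proxies often accept any key
        base_url=BASE_URL,
    )
except ImportError:
    llm = None  # pip install "pipecat-ai[openai]"
```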

arpan-reconectai avatar Mar 18 '25 15:03 arpan-reconectai

Need this for Spend Tracking https://docs.litellm.ai/docs/proxy/cost_tracking

kkarkos avatar Aug 13 '25 02:08 kkarkos

Is this something that is going to enter the roadmap? It would be very interesting, especially for adding routing and fallback capabilities based on strategies such as errors, latency, or token consumption.

anotine10 avatar Oct 21 '25 15:10 anotine10

We are definitely interested in this but time hasn't permitted just yet. If anyone is interested, we're happy to collaborate on this.

markbackman avatar Nov 06 '25 21:11 markbackman

@markbackman I am thinking of starting on this, as I am already familiar with LiteLLM. Just checking if your team is already working on it?

shubhamofbce avatar Nov 10 '25 19:11 shubhamofbce

@shubhamofbce we aren't working on this. LLMs in Pipecat are quite complex and require a good understanding of how Pipecat works. If you decide to work on this, please look at other LLM services and the context aggregators as a reference.
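As a starting point only, a hypothetical shape for such a service. This is a plain sketch, not Pipecat's actual API: a real implementation would subclass Pipecat's LLM service base class and emit frames via the context aggregators, as the existing services do. `litellm.acompletion` is LiteLLM's async, OpenAI-shaped entry point; everything else here is an assumption for illustration:

```python
class LiteLLMService:
    """Hypothetical sketch; a real version would subclass Pipecat's LLM base class."""

    def __init__(self, model: str, **params):
        self._model = model    # e.g. "gpt-4o" or "anthropic/claude-..."
        self._params = params  # passed through to litellm (temperature, etc.)

    async def stream_text(self, messages):
        import litellm  # pip install litellm

        # stream=True yields OpenAI-shaped chunks; a real Pipecat service would
        # convert each text delta into a frame for downstream processors.
        response = await litellm.acompletion(
            model=self._model, messages=messages, stream=True, **self._params
        )
        async for chunk in response:
            delta = chunk.choices[0].delta.content
            if delta:
                yield delta
```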

markbackman avatar Nov 10 '25 20:11 markbackman