Samuel Colvin
We should make it easier to skip parts of a model, or only send input or validated data, or just errors. What would your preference be @jules-ch?
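Roughly, the options could look something like this (a sketch with invented names, just to make the choices concrete):

```python
from enum import Enum

class RecordMode(str, Enum):
    # Hypothetical names, only to illustrate the options above.
    ALL = "all"              # input, validated data, and errors
    INPUT = "input"          # only the raw input
    VALIDATED = "validated"  # only the validated data
    ERRORS = "errors"        # only validation errors
```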
I thought LiteLLM was a library to provide interop between different LLMs, so why does it need a proxy?
We should have the same JSON path for all providers so queries are easy, and a key for `openai`, `anthropic`, etc. to make filtering easy.
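For example (a sketch with invented field names), making the provider name the single top-level key keeps the nested path uniform:

```python
# Sketch only: invented field names, but the same nested path for every
# provider, with the provider name as the single top-level key.
openai_event = {"openai": {"usage": {"input_tokens": 12, "output_tokens": 48}}}
anthropic_event = {"anthropic": {"usage": {"input_tokens": 12, "output_tokens": 48}}}

# Filtering by provider is then a top-level key check, and a JSONPath
# query like `$.*.usage.input_tokens` works unchanged for every provider.
assert "openai" in openai_event and "anthropic" in anthropic_event
```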
Let's start by trying to remove `rich`.
Hi @adsouza, we don't yet support built-in provider tools, but work to support them is underway in #1722. If you have any more details on how you want to...
That's useful thanks. Most likely the reason it's slow is that inside OAI the flow is: * LLM: interpret the question, decide to make a tool call to web search...
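Roughly (stub functions, not OpenAI's actual API), that round trip looks like this; the latency adds up because the steps are sequential model and network calls:

```python
def call_llm(prompt: str, context: str | None = None) -> dict:
    # Stand-in for a real model call.
    if context is None:
        return {"tool": "web_search", "query": prompt}
    return {"text": f"answer to {prompt!r} using {len(context)} chars of results"}

def web_search(query: str) -> str:
    # Stand-in for the hosted web-search tool.
    return f"results for {query!r}"

def answer(question: str) -> str:
    first = call_llm(question)             # 1. model interprets the question
    if first.get("tool") == "web_search":  # 2. model decides to search
        hits = web_search(first["query"])  # 3. search round trip
        return call_llm(question, context=hits)["text"]  # 4. second model call
    return first["text"]
```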
Yup, we're working on this very thing, see https://github.com/pydantic/pydantic-ai/issues/915 and the linked pull request.
Great, feel free to create a pull request. I'll review it and we can get it merged and a new release deployed.
This is needed for `pydantic-ai` to use this library rather than the OpenAI compatibility layer, see https://github.com/pydantic/pydantic-ai/issues/242.
We would like to reuse a pool of HTTP connections (encapsulated in an HTTPX client) when creating Ollama clients, so it's as cheap as possible to create new clients. Please...
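Something along these lines (a sketch; the client class and its `http_client` parameter are invented here, not the library's actual API):

```python
import httpx

# One shared httpx.AsyncClient owns the connection pool, so constructing
# a new client never opens new connections. 11434 is Ollama's default port.
shared_http = httpx.AsyncClient(base_url="http://localhost:11434")

class OllamaClientSketch:
    """Hypothetical stand-in for the library's client type."""

    def __init__(self, model: str, http_client: httpx.AsyncClient):
        self.model = model
        self._http = http_client  # reused pool; construction stays cheap

clients = [OllamaClientSketch(m, shared_http) for m in ("llama3", "mistral")]
```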