[Bug]: JSON Mode configuration is different between Ollama and OpenAI models
What happened?
It appears that JSON mode is configured differently for Ollama models than for OpenAI models.
With Ollama, I see that the documentation instructs us to use:
completion(model="ollama/mistral", format="json",...)
However, with OpenAI models, we should instead use:
completion(model="gpt-4-0125-preview", response_format={ "type" : "json_object" }, ...)
Can I ask, is that correct?
If so, this appears to be a deviation from the goals of LiteLLM (unifying API calls as an API switchboard), and I would propose that LiteLLM switch to the OpenAI API spec instead.
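For concreteness, here is a minimal side-by-side sketch of the two call styles described above (the message content is illustrative):

```python
from litellm import completion

messages = [{"role": "user", "content": "Reply with a JSON object."}]

# Ollama: JSON mode via the provider-specific `format` parameter
ollama_response = completion(
    model="ollama/mistral",
    format="json",
    messages=messages,
)

# OpenAI: JSON mode via the OpenAI-spec `response_format` parameter
openai_response = completion(
    model="gpt-4-0125-preview",
    response_format={"type": "json_object"},
    messages=messages,
)
```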
Relevant log output
N/A
Twitter / LinkedIn details
@ericmjl, in/ericmjl
Seems like a bug - can you point me to where in the docs you see this, @ericmjl?
Hi @krrishdholakia!
I found the Ollama docs specifying JSON mode here:
https://litellm.vercel.app/docs/providers/ollama
I don't think the LiteLLM docs for OpenAI models mention JSON mode, but I think one can infer that JSON mode can be toggled on by cross-referencing the OpenAI API.
response_format support for ollama should be live in v1.32.7 @ericmjl
Please reopen and bump me if this doesn't work!
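For example, a minimal sketch of the unified call (not verified here; model name illustrative):

```python
from litellm import completion

# With response_format support, the OpenAI-spec parameter should work
# for Ollama models too, instead of the provider-specific format="json".
response = completion(
    model="ollama/mistral",
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": "Reply with a JSON object."}],
)
print(response.choices[0].message.content)
```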
WHAT? Ollama is still not supported @_@
? https://docs.litellm.ai/docs/providers/ollama