
[Bug]: JSON Mode configuration is different between Ollama and OpenAI models

Open · ericmjl opened this issue 1 year ago · 2 comments

What happened?

It appears that the way to configure JSON mode between Ollama and OpenAI models is different.

With Ollama, I see that the documentation instructs us to use:

```python
completion(model="ollama/mistral", format="json", ...)
```

However, with OpenAI models, we instead should be using:

```python
completion(model="gpt-4-0125-preview", response_format={"type": "json_object"}, ...)
```

Can I ask, is that correct?

If so, then this appears to be a deviation from the goal of LiteLLM (unifying API calls behind a single switchboard). I would propose that LiteLLM standardize on the OpenAI API spec instead.
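To illustrate the asymmetry, here is a minimal shim that papers over it. This is a hypothetical helper, not part of LiteLLM: `normalize_json_mode` is my own name, and it simply rewrites the OpenAI-style `response_format` kwarg into Ollama's `format="json"` spelling before the kwargs are handed to `completion()`.

```python
def normalize_json_mode(model: str, **kwargs) -> dict:
    """Hypothetical shim: translate the OpenAI-style response_format kwarg
    into Ollama's provider-specific `format` parameter.

    Returns the kwargs dict that would be passed to litellm.completion().
    """
    response_format = kwargs.get("response_format") or {}
    if model.startswith("ollama/") and response_format.get("type") == "json_object":
        # Ollama (per the LiteLLM docs) expects format="json" instead.
        kwargs.pop("response_format")
        kwargs["format"] = "json"
    return {"model": model, **kwargs}


# OpenAI-style kwargs pass through unchanged:
normalize_json_mode("gpt-4-0125-preview", response_format={"type": "json_object"})
# → {"model": "gpt-4-0125-preview", "response_format": {"type": "json_object"}}

# Ollama models get the provider-specific spelling:
normalize_json_mode("ollama/mistral", response_format={"type": "json_object"})
# → {"model": "ollama/mistral", "format": "json"}
```

If LiteLLM accepted `response_format` for every provider, a shim like this would be unnecessary, which is the point of the proposal above.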

Relevant log output

N/A

Twitter / LinkedIn details

@ericmjl, in/ericmjl

ericmjl · Mar 19 '24 00:03

Seems like a bug - can you point me to where in the docs you see this, @ericmjl?

krrishdholakia · Mar 19 '24 01:03

Hi @krrishdholakia!

I found the Ollama docs specifying JSON mode here:

https://litellm.vercel.app/docs/providers/ollama

I don't think LiteLLM's OpenAI pages mention JSON mode, but one can infer that it can be toggled on by cross-referencing the OpenAI API reference.

ericmjl · Mar 19 '24 11:03

response_json support for ollama should be live in v1.32.7 @ericmjl

Please reopen and bump me if this doesn't work!

krrishdholakia · Mar 20 '24 05:03

> response_json support for ollama should be live in v1.32.7 @ericmjl
>
> Please reopen and bump me if this doesn't work!

WHAT? Ollama is still not supported @_@

HyperUpscale · Apr 04 '24 01:04

? https://docs.litellm.ai/docs/providers/ollama

krrishdholakia · Apr 04 '24 02:04