fix(ollama): metrics handling
The previous fix was flawed because the /chat API differs from the /generate API.
While fixing the regression, I noticed inconsistent behaviour in Ollama: prompt_eval_count disappears from responses on subsequent requests, while prompt_eval_duration persists. I believe this inconsistency is why the workaround is needed in the first place, and it adds complexity to litellm that might otherwise not be required.
I suspect this is a bug on Ollama's side, and I've opened an issue to confirm my assumptions with the community there: https://github.com/jmorganca/ollama/issues/2068
I'm pushing my changes here in the meantime for review; they can be merged once we get clarity from the Ollama team.
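For context, here is a minimal sketch of the kind of defensive handling the workaround implies (not the actual patch in this PR). The field names prompt_eval_count, prompt_eval_duration, and eval_count come from Ollama's response format; the function name and the characters-per-token fallback are illustrative assumptions.

```python
# Illustrative sketch only, not the code in this PR. Field names follow
# Ollama's /api/chat response; extract_usage and the 4-chars-per-token
# fallback are hypothetical.

def extract_usage(response_json: dict, prompt: str) -> dict:
    # On subsequent requests Ollama may omit prompt_eval_count (the prompt
    # is cached), even though prompt_eval_duration is still present, so we
    # can't simply read the field and trust a missing value to mean zero.
    prompt_tokens = response_json.get("prompt_eval_count")
    if prompt_tokens is None:
        # Rough fallback: assume ~4 characters per token.
        prompt_tokens = max(1, len(prompt) // 4)

    completion_tokens = response_json.get("eval_count", 0)
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
    }
```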
The latest updates on your projects. Learn more about Vercel for Git ↗︎
| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Jan 24, 2024 4:00am |
Side note: I wasn't able to run the local Ollama tests after uncommenting them, due to missing images and other odd behaviour with async calls. It might be related to these issues, or it might just be confusion on my part.
Perhaps some additional instructions with explicit dependencies at the top of those tests would help make local Ollama testing easier? A rough sketch of what I mean is below.
Another option is to see if we can get ollama deployed via CLI for testing, but that seems pretty ambitious ;)
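For the first idea, a straw-man skip-guard that could sit at the top of the test file (the base URL is Ollama's default port; the helper and marker names are assumptions, not existing litellm code):

```python
# Hypothetical skip-guard for local Ollama tests; only the default port is
# taken from Ollama's docs, everything else here is an assumption.
import pytest
import requests

OLLAMA_BASE = "http://localhost:11434"

def _ollama_up() -> bool:
    try:
        return requests.get(OLLAMA_BASE, timeout=2).ok
    except requests.RequestException:
        return False

# Usage: decorate local tests with @requires_ollama so they skip with a
# clear reason instead of failing on connection errors.
requires_ollama = pytest.mark.skipif(
    not _ollama_up(), reason="no local Ollama server at " + OLLAMA_BASE
)
```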
@puffo let me know when the PR is ready for review again
Any news on this? I also suffer from this issue :)
It has been fixed in this commit and released in v1.19.2.
Happy to close this if you'd rather avoid the extra overhead @krrishdholakia
It still persists for me even in v1.19.2; I tested it yesterday. I'd like to help track it down if it's an issue with LiteLLM.
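For anyone wanting to reproduce it, a minimal sketch along these lines should surface the problem. It assumes a local Ollama server with the llama2 model pulled; the model name is just an example.

```python
# Repro sketch: repeat the same request and compare the reported usage.
# Assumes Ollama is running locally with `llama2` pulled; adjust as needed.
import litellm

for i in range(3):
    resp = litellm.completion(
        model="ollama/llama2",
        messages=[{"role": "user", "content": "Say hi"}],
        api_base="http://localhost:11434",
    )
    # If the Ollama inconsistency leaks through, prompt_tokens drops or
    # zeroes out after the first call.
    print(i, resp.usage)
```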