Michael Struwig
Gently bumping this, since it hasn't seen activity in a while and also happens to affect me. :)
In offline discussion with @alexanderchiu, he identified what looks to be the cause: there is an unpinned version of `openai` in `llama-index` ([offending commit](https://github.com/run-llama/llama_index/commit/81265e7d6043e643c5f0dc03d5056e0ba4da0781)). Because our `llama-index`...
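For anyone else blocked on this in the meantime, a workaround sketch (my assumption, not a confirmed fix) is to pin `openai` explicitly in your own project so the resolver can't float it past a known-good release; the exact bound below is illustrative:

```python
# Workaround sketch: llama-index leaves `openai` unpinned, so pin it
# yourself in requirements.txt / pyproject.toml, e.g.:
#
#     openai==0.28.1   # illustrative bound; use whichever release works for you
#
# Then verify which version the resolver actually installed:
from importlib.metadata import version

print(version("openai"))
```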
Error handling for input hallucinations is being tracked here: https://github.com/jackmpcollins/magentic/issues/211
This would be a great addition. I've been handling errors like this manually for a while now, so having this baked in via an arg would be great.
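For context, this is roughly the manual handling I mean; a minimal sketch assuming a magentic `@prompt` function returning a pydantic model, with illustrative names and retry count:

```python
# Manual retry sketch: re-invoke the prompt when the model's output
# fails pydantic validation. All names here are illustrative.
from magentic import prompt
from pydantic import BaseModel, ValidationError

class Superhero(BaseModel):
    name: str
    age: int

@prompt("Create a superhero named {name}.")
def create_superhero(name: str) -> Superhero: ...

def create_superhero_with_retry(name: str, max_retries: int = 3) -> Superhero:
    for attempt in range(max_retries):
        try:
            return create_superhero(name)
        except ValidationError:
            # Output didn't match the schema; retry unless out of attempts.
            if attempt == max_retries - 1:
                raise
```

Having the retry loop baked into the decorator via an arg would remove exactly this kind of boilerplate.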
Just bumping here for visibility, since I have the same issue attempting to build a linux/amd64 image on an M-series ARM CPU (M2 Pro).
@krrishdholakia Absolutely, here are the verbose logs:

```python
Request to litellm:

litellm.completion(model='perplexity/mistral-7b-instruct', messages=[{'role': 'user', 'content': 'Hello!'}])

self.optional_params: {}
kwargs[caching]: False; litellm.cache: None
Final returned optional params: {'extra_body': {}}
self.optional_params: {'extra_body':...
```
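For reference, a minimal standalone reproduction of the call shown in those logs (my sketch; it assumes litellm's Perplexity provider picks up `PERPLEXITYAI_API_KEY` from the environment):

```python
# Minimal repro sketch of the request from the logs above.
# Assumes the Perplexity API key is exported as PERPLEXITYAI_API_KEY.
import litellm

response = litellm.completion(
    model="perplexity/mistral-7b-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```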
Thanks for the catch @edwinjosegeorge 🙈, I think it was just a late night for me.
Not sure if it's related, but I'm also unable to manually handle function calls (although as you'll see below, it would also likely apply to `prompt_chain` if it were working...
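Since the snippet above is truncated, here's a hedged sketch of what I mean by manually handling function calls, using magentic's `FunctionCall` return annotation; the tool function and prompt are illustrative:

```python
# Manual function-call handling sketch: annotate the return type as
# FunctionCall[...] so magentic hands back the call instead of executing it.
from magentic import FunctionCall, prompt

def get_weather(city: str) -> str:
    # Illustrative stand-in for a real tool.
    return f"Sunny in {city}"

@prompt("What's the weather in {city}?", functions=[get_weather])
def ask_weather(city: str) -> FunctionCall[str]: ...

call = ask_weather("Cape Town")
print(call())  # execute the returned function call manually
```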
Hi @jackmpcollins, it's been a couple of weeks, so I'm just following up on this again.
I've been playing with this a little bit locally, and I think it might make the most sense to have something like a `HybridStream` response type that is a merge...
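To make the idea concrete, a rough sketch of what that `HybridStream` merge could look like; everything here is illustrative, not an existing API:

```python
# Rough HybridStream sketch: iterable as raw text chunks while streaming,
# and resolvable to a parsed object once the stream is exhausted.
from typing import Callable, Generic, Iterator, TypeVar

T = TypeVar("T")

class HybridStream(Generic[T]):
    def __init__(self, chunks: Iterator[str], parse: Callable[[str], T]):
        self._chunks = chunks
        self._buffer: list[str] = []
        self._parse = parse

    def __iter__(self) -> Iterator[str]:
        for chunk in self._chunks:
            self._buffer.append(chunk)
            yield chunk

    def result(self) -> T:
        # Parse the accumulated text after streaming has finished.
        return self._parse("".join(self._buffer))
```

The intent is that callers who only want streamed text can iterate it directly, while callers who want the structured result call `result()` once the stream completes.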