As of about five days ago, it also seemed to be reliably reproducible on macOS. I do see some recent commits that might have inadvertently fixed the problem, perhaps...
Thanks for pointing that out, @julian-di. I do, however, find the current behavior a bit surprising. I'd expect it to return the cached evals in addition to the cached...
If we can get Ollama supported in the embeddings provider, it might also help fix some general failures with local LLM tool usage and [memory](https://github.com/joaomdmoura/crewAI/issues/447). There might be...
The fix is being reviewed here: https://github.com/BerriAI/litellm/pull/1514
Side note: I wasn't able to run the local Ollama tests after uncommenting them, due to missing images and other odd behaviour with async calls. It might be...
> Any news on this? I also suffer from this issue :)

It has been fixed in [this commit](https://github.com/BerriAI/litellm/commit/43f139fafd8e69d81c5fd5d8f95d511e0953c36f) and released in [v1.19.2](https://github.com/BerriAI/litellm/releases/tag/v1.19.2). Happy to close this if you'd...
Those kinds of options aren't really needed if you're using something like Tailwind CSS. You can achieve this quite easily by just adding the necessary [stroke styling classes](https://tailwindcss.com/docs/fill). Here's an example:...
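A minimal sketch of the idea, assuming an inline SVG chevron icon (the path markup here is hypothetical; the Tailwind utilities are the real point):

```html
<!-- stroke-blue-500 sets the stroke color, stroke-2 the stroke width,
     and fill-none keeps the icon as an outline -->
<svg class="h-6 w-6 fill-none stroke-blue-500 stroke-2"
     viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round">
  <path d="M9 5l7 7-7 7" />
</svg>
```

Since the utilities apply directly to the `<svg>` element, there's no need for a component-level stroke option.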