Hamza
> If anyone else comes across this until it's fixed for good, here is an ugly workaround:
>
> ```python
> if "tool_calls" in message or message["role"] == "tool":
>     context.add_message(message)
> ```
>
> ...
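For anyone trying this, the workaround seems to amount to filtering which messages get re-added to the context. A minimal, self-contained sketch of that idea; the `Context` class and the sample `history` are placeholders I made up, only the filter condition comes from the comment above:

```python
# Hypothetical stand-in for whatever context object this library exposes.
class Context:
    def __init__(self):
        self.messages = []

    def add_message(self, message):
        self.messages.append(message)

context = Context()

# Illustrative message history in the OpenAI chat format.
history = [
    {"role": "user", "content": "What's the weather?"},
    {"role": "assistant", "tool_calls": [{"id": "call_1", "type": "function"}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "Sunny, 21C"},
]

for message in history:
    # Per the workaround: only re-add tool-call requests and tool results.
    if "tool_calls" in message or message["role"] == "tool":
        context.add_message(message)

print(len(context.messages))  # 2: the tool_calls message and the tool result
```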
I still get this error when using an OpenAI LLM. Not sure why.
I have a similar issue. I'm not sure what embedding model to use with Groq if VoyageAI isn't working that well.
Which embeddings can be used for free? Also, are you saying that reducing the rate-limit option should fix it?
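Not sure what this project supports out of the box, but one free option is running embeddings locally with sentence-transformers. A minimal sketch, assuming the `all-MiniLM-L6-v2` model as an example; this isn't tied to this library's API:

```python
# Free, local embeddings via sentence-transformers (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer

# Example model; swap in whichever model fits your use case.
model = SentenceTransformer("all-MiniLM-L6-v2")

embeddings = model.encode(["first document", "second document"])
print(embeddings.shape)  # (2, 384) for this particular model
```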
Any idea when this will be merged?