Output involving tool calls cannot be rendered.
I have built an agent that writes a complete article from keywords. However, whenever a response involves tool calls, nothing is shown on the page. Plain question-and-answer interactions display normally.
I am serving the LLM locally with litellm, and the litellm logs show that requests involving tool calls do return complete output. What could be causing this?
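For reference, a minimal sketch of how to check what the client actually receives back from the litellm proxy, independent of the page rendering (the host, port, model alias, and tool definition below are placeholders, not my real setup):

```python
from openai import OpenAI

# Point the OpenAI client at the local litellm proxy (placeholder URL/key).
client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-anything")

resp = client.chat.completions.create(
    model="qwen3-32b",  # whatever alias the litellm proxy is configured with
    messages=[{"role": "user", "content": "Write an article about solar energy."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "write_article",  # hypothetical tool, just to trigger a tool call
            "description": "Write an article from a list of keywords",
            "parameters": {
                "type": "object",
                "properties": {
                    "keywords": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["keywords"],
            },
        },
    }],
)

msg = resp.choices[0].message
print("content:", msg.content)        # may be empty when the model only returns a tool call
print("tool_calls:", msg.tool_calls)  # the part that never shows up on the page
```

If `tool_calls` is populated here, the proxy side looks fine and the problem is in how the page renders tool-call messages.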
Which model are you using?
@arkml I'm using Qwen3-32B deployed via litellm, and I see the same issue with gpt-4o (via a proxy).
@arkml Okay, after switching the model to Qwen2.5-32B everything works normally. I suspect the cause is that thinking mode was enabled in Qwen3-32B.
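If keeping Qwen3-32B is preferable, its thinking mode can usually be disabled per request instead of switching models. A minimal sketch, assuming the litellm proxy fronts an OpenAI-compatible backend such as vLLM that honors `chat_template_kwargs` and forwards `extra_body` (the host, port, and model alias are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-anything")

resp = client.chat.completions.create(
    model="qwen3-32b",
    # Soft switch: Qwen3 treats a trailing "/no_think" as "skip the thinking block for this turn".
    messages=[{"role": "user", "content": "Write an article about solar energy. /no_think"}],
    # Hard switch: ask the backend's chat template to disable thinking entirely
    # (whether this reaches the backend depends on the litellm/backend configuration).
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(resp.choices[0].message.content)
```

With thinking disabled, the response no longer carries a reasoning block ahead of the tool call, which may be what the page fails to render.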