Ahmer Tabassum
Hey @hdduytran Thanks for sharing your use case! Just to better understand and help out, could you clarify what exactly you're trying to achieve? Are you aiming to build a...
@shahrukhx01 you can avoid the double response generation by passing `False` to the `run_llm` parameter:

```python
properties = FunctionCallResultProperties(run_llm=False)
await result_callback(result, properties=properties)
```

Moreover, the best way to do...
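For anyone landing here later, a minimal self-contained sketch of the pattern above — note that `FunctionCallResultProperties` here is a stand-in dataclass mimicking Pipecat's class of the same name, and `result_callback` / `my_function_handler` are stubs for illustration, not Pipecat internals:

```python
import asyncio
from dataclasses import dataclass

# Stand-in for Pipecat's FunctionCallResultProperties (illustration only).
@dataclass
class FunctionCallResultProperties:
    run_llm: bool = True  # when False, no follow-up LLM completion is triggered

async def my_function_handler(result_callback):
    """Hypothetical function-call handler that returns a result
    without kicking off a second LLM response."""
    result = {"status": "ok"}
    properties = FunctionCallResultProperties(run_llm=False)
    await result_callback(result, properties=properties)

async def main():
    received = []

    # Stub callback standing in for the one Pipecat passes to handlers.
    async def result_callback(result, properties=None):
        received.append((result, properties))

    await my_function_handler(result_callback)
    print(received[0][1].run_llm)  # False -> the LLM will not re-run

asyncio.run(main())
```

The key point is that the properties object travels with the result, so the framework knows at callback time whether to generate a follow-up response.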
@shahrukhx01 In OpenAI's LLM, we can achieve that by using the `tool_choice` parameter. By setting `tool_choice` to `{ "type": "function", "function": { "name": "my_function" } }`, you can explicitly instruct...
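To make that concrete, here is a sketch of the request payload with `tool_choice` forcing a specific function — the payload shape follows OpenAI's tools API, while `my_function`, the model name, and the helper itself are placeholders for illustration:

```python
def build_forced_tool_request(model: str, messages: list, function_name: str) -> dict:
    """Build a chat-completions payload that forces the model to call
    `function_name` instead of replying with free text (sketch only)."""
    return {
        "model": model,
        "messages": messages,
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": function_name,
                    "description": "Placeholder tool definition for the example.",
                    "parameters": {"type": "object", "properties": {}},
                },
            }
        ],
        # Explicitly instructs the model to call exactly this function.
        "tool_choice": {"type": "function", "function": {"name": function_name}},
    }

request = build_forced_tool_request(
    "gpt-4o",  # placeholder model name
    [{"role": "user", "content": "Book a table for two."}],
    "my_function",
)
print(request["tool_choice"])
```

Setting `tool_choice` to `"auto"` instead would let the model decide whether to call a tool at all; the dict form above removes that choice.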
@markbackman I've encountered this issue as well. It happens occasionally. I have a voice bot application in production, and for some users, the bot malfunctions.
@markbackman, I am still seeing latency of at least 3–5 seconds. The only difference is that my prompt is large, since the bot has to handle multiple complex tasks. Additionally,...
@markbackman I was working on another project this past week. I'll try measuring the TTFB this week and let you know with an update! Then we'll close it.