Can deep-searcher's responses be streamed?
@zhoufengen It is not supported at the moment. If you are interested, please contribute. Thanks a lot
@zhoufengen Can you describe the requirement in more detail? Which part should be streamed: the final LLM output, or the thought-process logs?
Thank you very much for your reply! All outputs should be streamed back: if this agent is to provide a service to others, long waits with no feedback will cause people to abandon it.
@zhoufengen With the existing code being called from the command terminal, you should be able to send the output from each iteration to the user as the thought process, instead of sending it to the terminal output (see the sketch below). There is a different market and set of expectations for deep-search applications compared to fast chat bots, I believe: users should be OK with waiting and getting a callback later, like a Slack message, as a trade-off for better answers. ChatGPT's deep research sends you an answer after a long while.
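A minimal sketch of the per-iteration idea, assuming a loop-style agent. The names `run_iteration` and `synthesize_answer` are placeholders standing in for deep-searcher's real calls, not its actual API:

```python
from typing import Iterator, List, Tuple


def run_iteration(question: str, context: List[str]) -> Tuple[str, List[str], bool]:
    # Placeholder for one retrieve/reflect step of the agent.
    return f"searched for '{question}'", ["retrieved chunk"], True


def synthesize_answer(question: str, context: List[str]) -> str:
    # Placeholder for the final LLM synthesis call.
    return f"final answer built from {len(context)} chunks"


def stream_search(question: str, max_iter: int = 3) -> Iterator[str]:
    """Yield each iteration's thought process as it happens, then the final answer."""
    context: List[str] = []
    for i in range(max_iter):
        thought, chunks, done = run_iteration(question, context)
        context.extend(chunks)
        yield f"[iteration {i + 1}] {thought}"
        if done:
            break
    yield synthesize_answer(question, context)


if __name__ == "__main__":
    # A web handler (SSE or WebSocket) could forward each chunk the same way
    # instead of printing it to the terminal.
    for chunk in stream_search("can responses be streamed?"):
        print(chunk)
```

The same generator could be wrapped by an API endpoint so the caller receives each iteration's progress immediately, with the final answer arriving last.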