StreamingResponse
In order to have a good LLM chat UX, we need to stream the response to the client. Langserve does this with a dedicated endpoint; hayhooks could do the same (pseudocode):
async def pipeline_stream(pipeline_run_req: PipelineRunRequest) -> StreamingResponse:
    buffer = ...  # filled chunk by chunk by the pipeline's streaming_callback
    # Note: pipe.run is blocking, so it would have to run in a background
    # thread/task while buffer_generator drains the buffer.
    pipe.run(data=pipeline_run_req.dict())
    return StreamingResponse(buffer_generator(buffer))

app.add_api_route(
    path=f"/{pipeline_def.name}/stream",
    endpoint=pipeline_stream,
    methods=["POST"],
    name=pipeline_def.name,
    # no response_model here: the body is a raw chunk stream,
    # not a PipelineRunResponse JSON object
)
Additionally, Haystack should provide a special streaming_callback that writes each chunk's content to a buffer that hayhooks can read from. Maybe the Pipeline could encapsulate this logic and provide a pipe.stream method that returns a generator, or something like that.
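A minimal sketch of that idea, assuming Haystack's streaming_callback hook on generator components (which receives a StreamingChunk) and bridging it to a thread-safe queue that a plain generator drains. The function name stream_pipeline_run and the pipeline wiring are illustrative, not an existing hayhooks API:

from queue import Queue
from threading import Thread

from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
from haystack.dataclasses import StreamingChunk

def stream_pipeline_run(data: dict):
    """Yield LLM chunks while the pipeline runs in a background thread."""
    queue: Queue = Queue()

    def on_chunk(chunk: StreamingChunk) -> None:
        # Called by the generator component for every streamed token.
        queue.put(chunk.content)

    pipe = Pipeline()
    pipe.add_component("llm", OpenAIGenerator(streaming_callback=on_chunk))

    def run() -> None:
        pipe.run(data=data)
        queue.put(None)  # sentinel: pipeline finished

    Thread(target=run, daemon=True).start()
    while (item := queue.get()) is not None:
        yield item

The FastAPI endpoint above could then simply return StreamingResponse(stream_pipeline_run(pipeline_run_req.dict())); FastAPI iterates a sync generator in a threadpool, so the event loop is not blocked while the pipeline produces chunks.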
Yes @franzwilding, we have this item on our roadmap; thanks for raising this issue and voicing your preferred solution.
@vblagoje any idea yet when this feature will become available? We are using Haystack in quite a few projects now and want to know whether it is worth putting more energy into our workaround or whether we can expect proper streaming out of a pipeline soon :) ?
Yes, I understand totally! The support is currently being worked on 😎
@vblagoje Any updates regarding an ETA for the feature? Thanks in advance for the heads-up
@aymbot on our immediate roadmap for Q3, starting soon 🙏
With this feature implemented, hayhooks would be a strong alternative to langserve. Thanks again for working on it
We really need this feature. Is there any recent update? Streaming is very important because most third-party UIs and packages expect to be called in streaming mode.
Any update?
If anyone is following this thread: https://dev.to/arya_minus/async-haystack-streaming-over-fastapi-endpoint-2kj0
Hey @mpangrazzi, I believe this feature has already been added to hayhooks, right? If so, could we close this issue?
@sjrl yes, streaming is supported (sync version). Async support is planned too, but I guess we can close this.
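For anyone finding this later, a rough sketch of what the supported sync streaming looks like in a hayhooks pipeline wrapper, based on the hayhooks docs at the time of writing (names such as BasePipelineWrapper, streaming_generator, and get_last_user_message come from there; the pipeline YAML path and run args are illustrative, so double-check against the current README):

from pathlib import Path
from typing import Generator, List, Union

from haystack import Pipeline
from hayhooks import BasePipelineWrapper, get_last_user_message, streaming_generator

class PipelineWrapper(BasePipelineWrapper):
    def setup(self) -> None:
        # Load the Haystack pipeline once at startup (illustrative path).
        yaml_source = (Path(__file__).parent / "pipeline.yml").read_text()
        self.pipeline = Pipeline.loads(yaml_source)

    def run_chat_completion(self, model: str, messages: List[dict], body: dict) -> Union[str, Generator]:
        question = get_last_user_message(messages)
        # streaming_generator runs the pipeline and yields chunks as they arrive.
        return streaming_generator(
            pipeline=self.pipeline,
            pipeline_run_args={"prompt_builder": {"query": question}},
        )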