
feat: Response streaming over gRPC

Open Bec-k opened this issue 2 years ago • 7 comments

Feature request

Would be nice to have a streaming feature for the generation API, so that the response streams token by token instead of waiting until the full response is generated. gRPC has built-in support for streaming responses, and proto code generation handles it as well. The only work required is on your server, to pipe tokens into the stream.
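For illustration, a minimal sketch of what server-side streaming looks like with plain grpcio; the `Generator` service, the `GenerateRequest`/`Token` messages, and the `generate_tokens` helper are hypothetical placeholders, not part of BentoML:

```python
# Hypothetical proto (placeholder, not from BentoML):
#   service Generator {
#     rpc Generate (GenerateRequest) returns (stream Token);
#   }
from concurrent import futures

import grpc

# generated_pb2 / generated_pb2_grpc stand in for the code protoc
# would emit from the hypothetical proto above.
import generated_pb2 as pb
import generated_pb2_grpc as pb_grpc


class GeneratorServicer(pb_grpc.GeneratorServicer):
    def Generate(self, request, context):
        # A server-streaming handler is just a generator: each yielded
        # message is sent to the client as soon as it is produced.
        for token in generate_tokens(request.prompt):  # hypothetical model call
            yield pb.Token(text=token)


server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
pb_grpc.add_GeneratorServicer_to_server(GeneratorServicer(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```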

Motivation

This feature would allow streaming the response while it is being generated, instead of waiting until it is fully generated.

Other

No response

Bec-k avatar Jun 23 '23 10:06 Bec-k

This would require BentoML's gRPC feature to support streaming, which it currently does not.

aarnphm avatar Jun 23 '23 11:06 aarnphm

Streaming is now supported via SSE. gRPC streaming will require streaming support for gRPC in BentoML. I'm going to transfer this to BentoML for now, since SSE should be sufficient for most use cases.

aarnphm avatar Sep 06 '23 19:09 aarnphm

Is any documentation available for that?

Bec-k avatar Sep 07 '23 08:09 Bec-k

I guess this? https://docs.bentoml.org/en/latest/guides/streaming.html
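For reference, consuming such a streamed HTTP response from the client side needs no special tooling; a minimal sketch with `httpx`, where the endpoint path and payload are placeholders rather than anything from the linked guide:

```python
import httpx

# Read the response body chunk by chunk instead of waiting for the full
# generation; URL and payload are placeholders. For an SSE endpoint the
# raw chunks include the "data:" event framing.
with httpx.stream(
    "POST",
    "http://localhost:3000/generate",
    json={"prompt": "Hello"},
    timeout=None,
) as response:
    for chunk in response.iter_text():
        print(chunk, end="", flush=True)
```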

Bec-k avatar Sep 07 '23 08:09 Bec-k

> Streaming is now supported via SSE. gRPC streaming will require streaming support for gRPC in BentoML. I'm going to transfer this to BentoML for now, since SSE should be sufficient for most use cases.

Well, not really. There are a lot of AI pipelines running internally between servers, and they mostly use Kafka or gRPC streaming to communicate with each other.
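For a service-to-service pipeline like this, the consuming side of a gRPC server-streaming RPC is similarly simple; a minimal sketch assuming the same hypothetical `Generator` service sketched earlier in this thread:

```python
import grpc

import generated_pb2 as pb
import generated_pb2_grpc as pb_grpc

# An internal pipeline stage consuming tokens as they arrive; the
# Generator service, its messages, and the `process` handoff are
# hypothetical placeholders.
with grpc.insecure_channel("upstream-model:50051") as channel:
    stub = pb_grpc.GeneratorStub(channel)
    # A server-streaming call returns an iterator of messages; each
    # token can be forwarded downstream as soon as it is received.
    for token in stub.Generate(pb.GenerateRequest(prompt="Hello")):
        process(token.text)  # hypothetical downstream handoff
```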

Bec-k avatar Sep 07 '23 08:09 Bec-k

@aarnphm Is there any roadmap or plan to support gRPC streaming?

npuichigo avatar Sep 09 '23 10:09 npuichigo

Hi @npuichigo @Bec-k - I would love to connect and hear more about your use cases regarding gRPC streaming support; this could really help the team and community prioritize it. Could you drop me a DM in our community Slack?

parano avatar Sep 19 '23 00:09 parano