grpcakkastream
Support sending status at end of stream
From @jroper:
By the way, one problem with the current interface that you're providing is that gRPC allows the server to send a status at the end of the response stream. Using a Flow[Request, Response, NotUsed] provides no mechanism for either the server to produce or the client to consume that status message. To allow the status message to be consumed or produced, the flow could materialize to a future of the status message, for example Flow[Request, Response, Future[GrpcStatus]]. This is what we were thinking of implementing in Lagom when we add gRPC support.
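The idea of materializing the flow to a future status can be sketched in plain Scala without Akka or gRPC on the classpath. Everything below is illustrative, not grpcakkastream's API: `GrpcStatus`, `Ok`, `Error`, and `runWithStatus` are hypothetical names, and a `Seq`-processing function stands in for an actual `Flow`. The point is that a `Promise` completed when the stream terminates gives the caller a `Future[GrpcStatus]` alongside the elements, mirroring `Flow[Request, Response, Future[GrpcStatus]]`:

```scala
import scala.concurrent.{Future, Promise}
import scala.util.{Failure, Success, Try}

// Hypothetical stand-in for the trailing gRPC status.
sealed trait GrpcStatus
case object Ok extends GrpcStatus
final case class Error(code: Int, message: String) extends GrpcStatus

// Simplified stand-in for running a Flow: process the inputs, then
// complete the materialized Future with the trailing status.
def runWithStatus[A, B](in: Seq[A])(f: A => B): (Seq[B], Future[GrpcStatus]) = {
  val status = Promise[GrpcStatus]()
  Try(in.map(f)) match {
    case Success(elems) =>
      status.success(Ok) // the single success code, sent as the trailer
      (elems, status.future)
    case Failure(t) =>
      status.success(Error(2, t.getMessage)) // code 2 = UNKNOWN in gRPC
      (Seq.empty, status.future)
  }
}
```

In a real Akka Streams implementation the `Promise` would be completed from the stage's termination signal rather than synchronously, but the materialized-value shape is the same.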
I haven't looked into sending a final status at the end of the stream, but is it something that can be worked around using exception/error handling?
Something like:
.onComplete {
  // OK is the only success status; normal completion maps to onCompleted()
  case Success(_)                         => obs.onCompleted()
  // gRPC's own status exceptions already carry a status code
  case Failure(t: StatusException)        => obs.onError(t)
  case Failure(t: StatusRuntimeException) => obs.onError(t)
  // anything else is converted via Status.fromThrowable (typically UNKNOWN)
  case Failure(t: Throwable)              => obs.onError(Status.fromThrowable(t).asException())
}
Good point. Since there's only one success code and the rest are errors, onError can be used with a list of well-known exceptions to communicate a particular status code, and onComplete can be used to communicate the success status code.
As long as gRPC doesn't introduce additional success codes (as HTTP has, e.g. 204 No Content), that should be fine.
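The convention agreed on above can be sketched in a few lines of self-contained Scala. Note the `StatusException` defined here is a hypothetical stand-in for `io.grpc.StatusException` (which isn't on the classpath in this sketch), and `trailingStatusCode` is an illustrative name, not part of the library:

```scala
import scala.util.{Failure, Success, Try}

// Stand-in for io.grpc.StatusException: a well-known exception
// type that carries its own gRPC status code.
final case class StatusException(code: Int) extends RuntimeException

// gRPC status codes: 0 (OK) is the only success code; every other
// code is an error, so errors travel through the failure channel.
def trailingStatusCode(result: Try[Unit]): Int = result match {
  case Success(_)                  => 0      // OK
  case Failure(e: StatusException) => e.code // well-known, keeps its code
  case Failure(_)                  => 2      // UNKNOWN for anything else
}
```

Because success is a single code, the stream's normal completion is enough to signal it, and only failures need to carry extra information.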
Hi, nice work! I have a problem though :) I experimented a bit with the library and ran into what is probably the status-propagation issue described here: when I'm using a server-streaming client flow and the server is unavailable, the flow just ends and no exception is raised anywhere; the status is completely ignored. Is there a chance to surface errors when using client flows? Thanks.
Thanks for pointing this out. https://github.com/btlines/grpcakkastream/pull/21 should solve it.