grpc-swift
Backpressure support with Swift Combine
Is your feature request related to a problem? Please describe it.
Currently, there is no way to propagate backpressure from a client-streaming RPC. For example:
service TestService {
  rpc CreateStream(stream Input) returns (google.protobuf.Empty);
}
is used with generated code that looks like the following:
let stream = client.createStream(handler: { _ in })
_ = stream.sendMessage(input)
If the server is not processing messages fast enough, messages will (if I've read the implementation correctly) buffer in memory without bound.
For use cases where the data to be transmitted does not necessarily fit in memory, this is difficult to work with, since no backpressure is propagated to the caller.
Describe the solution you'd like
Swift supports backpressure as a first-class feature in the Combine framework. Subscribers can propagate "demand" sizes to Publishers, and Publishers can use this information to decide whether to generate or retrieve more data. Example of a possible interface:
let stream = client.createStream(handler: { _ in })
publisher.subscribe(stream.createSubscriber())
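As a self-contained illustration of the demand mechanism (plain Combine, no grpc-swift types involved), a custom Subscriber can pace any Publisher by requesting one element at a time — the same mechanism a stream-backed subscriber could use:

```swift
import Combine

// Minimal illustration of Combine demand: request one element at a
// time, so the publisher never runs ahead of the consumer.
final class OneAtATime: Subscriber {
    typealias Input = Int
    typealias Failure = Never

    func receive(subscription: Subscription) {
        // Initial demand: exactly one element.
        subscription.request(.max(1))
    }

    func receive(_ input: Int) -> Subscribers.Demand {
        // Process `input` (e.g. hand it to sendMessage), then ask for
        // one more only when ready.
        print("received \(input)")
        return .max(1)
    }

    func receive(completion: Subscribers.Completion<Never>) {}
}

(1...5).publisher.subscribe(OneAtATime())
```

A stream-backed subscriber would return `.none` from `receive(_:)` while the transport is busy and issue more demand once the pending write completes.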
Describe alternatives you've considered
The caller can track the number of acks received via the EventLoopFuture returned from sendMessage and react to the buffer size measured in this way. This approach works, but it's likely that the tuned buffer size will be suboptimal.
If client code already uses Combine in other parts of the application, the extra hop to propagate the backpressure signal is significant overhead.
Additional Context
Thanks for the consideration! Please let me know if I've missed something above or if there is already an established solution for this.
Currently, backpressure is propagated using the returned Future. That Future will complete when the data has reached the network. To avoid ballooning memory use, implementations should pay attention to that Future and avoid queueing too much data behind it.
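A minimal sketch of this Future-based pacing, assuming a `send` closure that wraps the generated call's `sendMessage` (the generic `Input` and the closure are placeholders, not grpc-swift API):

```swift
import NIO

// Sketch: serialize writes by chaining each send onto the Future of
// the previous one, so at most one message is queued behind the
// network at any time. `send` stands in for the call's sendMessage.
func sendAll<Input>(
    _ messages: ArraySlice<Input>,
    on eventLoop: EventLoop,
    send: @escaping (Input) -> EventLoopFuture<Void>
) -> EventLoopFuture<Void> {
    guard let next = messages.first else {
        return eventLoop.makeSucceededFuture(())
    }
    // Only issue the next write once this one has reached the network.
    return send(next).flatMap {
        sendAll(messages.dropFirst(), on: eventLoop, send: send)
    }
}
```

A real implementation would likely allow a small window of in-flight writes rather than strict one-at-a-time serialization, trading a bounded amount of memory for throughput.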
Something we could consider doing is to expose the writability state of the underlying NIO Channel. This is a less granular feedback mechanism: it flicks over when the pending outbound buffer size gets too large. This would be a lot easier to map into Combine.
Thanks for the response Lukasa - sounds like the Future-based approach (similar to the "alternatives considered" in my original post) is the way to go as an immediate solution.
Exposing Channel.isWritable will be helpful for mapping over to Combine, since it will allow client subscribers to know when to request more data from the publisher.
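Assuming such a writability signal were exposed, the mapping might look like the following sketch. The `writabilityChanged(to:)` hook is hypothetical; none of this is current grpc-swift API:

```swift
import Combine
import NIO

// Speculative sketch: drive Combine demand from channel writability.
// Demand is issued only while the transport can absorb writes.
final class WritabilityPacedSubscriber<Message>: Subscriber {
    typealias Input = Message
    typealias Failure = Never

    private var subscription: Subscription?
    private var isWritable = true
    private let send: (Message) -> Void

    init(send: @escaping (Message) -> Void) {
        self.send = send
    }

    func receive(subscription: Subscription) {
        self.subscription = subscription
        subscription.request(.max(1))
    }

    func receive(_ input: Message) -> Subscribers.Demand {
        send(input)
        // Only ask for more while the channel is writable.
        return isWritable ? .max(1) : .none
    }

    func receive(completion: Subscribers.Completion<Never>) {}

    // Would be called from the hypothetical writability notification.
    func writabilityChanged(to writable: Bool) {
        isWritable = writable
        if writable { subscription?.request(.max(1)) }
    }
}
```

The design mirrors how NIO itself uses `channelWritabilityChanged` internally: demand stalls at zero while the outbound buffer is above the high-water mark and resumes once it drains.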
In addition to this, it would be useful to expose information that lets the Combine wrapper determine how much data should be buffered once the channel becomes writable. I'm not familiar with SwiftNIO, but this may be possible by exposing the WriteBufferWaterMark and the current write buffer size from the channel and allowing the client to compute a suitable number.
SwiftNIO today exposes the watermarks, but not the current values of those buffers. We could certainly consider an enhancement to do so.