
Stream upstream response to client on demand

dvgica opened this issue on Dec 13, 2019 · 0 comments

Previously, the HttpProxy eagerly consumed the upstream response before streaming it to the client. This was necessary because there was no guarantee that the client would consume the whole response and thus pull it through the connection slot. If the response was not consumed, the upstream was backpressured and the connection slot was never released back to the pool; eventually the API gateway would run out of connection slots.

In Akka HTTP 10.1.x, the new connection pool implementation automatically clears any slot whose response entity is not consumed. See the response-entity-subscription-timeout setting at https://doc.akka.io/docs/akka-http/current/configuration.html for a description of this mechanism.
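For reference, the setting lives under the host connection pool config and can be tuned in application.conf. The value shown below is illustrative; check the linked configuration page for the actual default:

```hocon
akka.http.host-connection-pool {
  # If a response entity is not subscribed to within this window,
  # the pool discards it and clears the slot instead of leaking it.
  response-entity-subscription-timeout = 1.second
}
```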

Streaming the response continuously, instead of eagerly consuming it and then streaming it to the client, is preferable from both a memory consumption and response latency point of view.
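The difference can be sketched roughly as follows. This is not the actual HttpProxy code, just an illustration assuming Akka HTTP 10.1.x client APIs (`Http().singleRequest`, `HttpEntity.toStrict`); names like `ProxySketch` are made up:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import scala.concurrent.Future
import scala.concurrent.duration._

object ProxySketch {
  implicit val system: ActorSystem = ActorSystem("proxy")
  import system.dispatcher

  // Old approach: buffer the whole upstream entity in memory, then reply.
  // Frees the pool slot promptly, but costs memory and adds latency.
  def proxyEager(request: HttpRequest): Future[HttpResponse] =
    Http().singleRequest(request).flatMap { response =>
      response.entity.toStrict(10.seconds).map(strict => response.withEntity(strict))
    }

  // New approach: pass the streamed entity straight through. The client
  // pulls bytes on demand; an unconsumed entity is cleaned up by the
  // pool's response-entity-subscription-timeout.
  def proxyStreaming(request: HttpRequest): Future[HttpResponse] =
    Http().singleRequest(request)
}
```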

This will ship as an RC release for now, to go through testing.

dvgica · Dec 13 '19 21:12