"Stream was already consumed" exception thrown even with enabled buffering.
Update with a short summary of the whole thread: not fixed; see the temporary workaround below.
Description
We are experiencing "Stream was already consumed" issue, which has been explained previously: https://github.com/microsoft/reverse-proxy/issues/1683
With HTTP/2, for example, we could have already started sending the request body to the server when the server responds with a GOAWAY. The server can tell us that it never processed the request, so HttpClient knows it can try to retry without side effects to the server application.
First, the request is failing with the following error message:
[HandlerMessage] poolId: 50533821, workerId: 0, requestId: 0, memberName: SendWithVersionDetectionAndRetryAsync, message: Retry attempt 1 after connection failure. Connection exception: System.Net.Http.HttpRequestException: The request was aborted.
---> System.Net.Http.HttpProtocolException: The HTTP/2 server closed the connection. HTTP/2 error code 'NO_ERROR' (0x0). (HttpProtocolError)
--- End of inner exception stack trace ---
at System.Net.Http.Http2Connection.ThrowRetry(String message, Exception innerException)
at System.Net.Http.Http2Connection.Http2Stream.TryEnsureHeaders()
at System.Net.Http.Http2Connection.Http2Stream.ReadResponseHeadersAsync(CancellationToken cancellationToken)
at System.Net.Http.Http2Connection.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
The retry attempt fails with this error:
[HandlerMessage] poolId: 50533821, workerId: 19317591, requestId: 0, memberName: SendAsync, message: Sending request content failed: System.InvalidOperationException: Stream was already consumed.
at Yarp.ReverseProxy.Forwarder.StreamCopyHttpContent.SerializeToStreamAsync(Stream stream, TransportContext context, CancellationToken cancellationToken)
at System.Net.Http.Http2Connection.Http2Stream.SendRequestBodyAsync(CancellationToken cancellationToken)
at System.Net.Http.Http2Connection.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
Problem
We are already buffering the request body for logging purposes using EnableBuffering, but this is not respected by StreamCopyHttpContent.
- Does it make sense to fix the StreamCopyHttpContent.SerializeToStreamAsync behavior so it won't throw the "Stream was already consumed" exception if stream.CanSeek is true (or the stream has a type like FileBufferingReadStream)?
- What's the official workaround to the "buffering is needed to be able to retry failed requests" problem? How can it be enabled by default for all requests to downstream services?
Our current fix is implemented as a transform where we convert content to StreamContent that supports buffering. Is this approach reasonable, or is there a better way to address the issue?
public void Apply(TransformBuilderContext context)
{
    context.AddRequestTransform(transformContext =>
    {
        if (transformContext.ProxyRequest.Content != null)
        {
            // Wrap the (buffered) incoming request body in a plain StreamContent,
            // copying the original content headers across.
            var content = new StreamContent(transformContext.HttpContext.Request.Body);
            foreach (var header in transformContext.ProxyRequest.Content.Headers)
            {
                content.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray());
            }
            transformContext.ProxyRequest.Content = content;
        }
        return ValueTask.CompletedTask;
    });
}
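For context, a sketch of how a transform like this is typically wired up, assuming it lives in a class implementing ITransformProvider (the class name and namespaces below are illustrative, not our exact setup):

using Yarp.ReverseProxy.Transforms;
using Yarp.ReverseProxy.Transforms.Builder;

// Illustrative sketch: the Apply method shown above sits in a provider type so the
// transform is applied to every route.
public class BufferingRequestTransformProvider : ITransformProvider
{
    public void ValidateRoute(TransformRouteValidationContext context) { }
    public void ValidateCluster(TransformClusterValidationContext context) { }

    public void Apply(TransformBuilderContext context)
    {
        // The AddRequestTransform call from the snippet above goes here.
    }
}

// Registration in Program.cs:
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"))
    .AddTransforms<BufferingRequestTransformProvider>();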
Expected behavior
When buffering the request body using EnableBuffering(), proxying the request to downstream services should not result in a "Stream was already consumed" exception.
Similar issues:
- https://github.com/microsoft/reverse-proxy/issues/2022
- https://github.com/microsoft/reverse-proxy/issues/1683
Does it make sense to fix the StreamCopyHttpContent.SerializeToStreamAsync behavior so it won't throw the "Stream was already consumed" exception if stream.CanSeek is true?
It may be tricky (mainly around thread safety), but that might be possible. Most uses won't have requests buffered though.
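To illustrate what such a change might look like conceptually, here is a hedged sketch (not YARP's actual StreamCopyHttpContent) of an HttpContent that rewinds a seekable source before each serialization attempt; the class name is illustrative:

using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Conceptual sketch only: rewinds a seekable source so an HttpClient-level retry can
// replay the body instead of hitting "Stream was already consumed". A real fix would
// also have to synchronize with a potentially still-running first attempt, which is
// the thread-safety concern mentioned above.
public sealed class RewindableStreamContent : HttpContent
{
    private readonly Stream _source;

    public RewindableStreamContent(Stream source) => _source = source;

    protected override Task SerializeToStreamAsync(Stream stream, TransportContext? context)
    {
        // e.g. the FileBufferingReadStream created by Request.EnableBuffering() is seekable.
        if (_source.CanSeek)
        {
            _source.Position = 0;
        }
        return _source.CopyToAsync(stream);
    }

    protected override bool TryComputeLength(out long length)
    {
        length = 0;
        return false;
    }
}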
Our current fix is implemented as a transform where we convert content to StreamContent that supports buffering. Is this approach reasonable, or is there a better way to address the issue?
Please don't override the request.Content. YARP uses a custom content so that we can reliably synchronize things, do proper error reporting, telemetry, etc. If it's replaced, a bunch of stuff will break. We really should just immediately throw here if you change it (may do so in some future version).
What's the official workaround to "buffering is needed to be able to retry failed requests" problem?
There's no "official" support for request retries yet. #56 has a bunch of discussion around how that can be done. Essentially it boils down to resetting the response and calling into YARP again.
Thanks for your explanation. We can check and try to do a RetryMiddleware or something similar, but what I am wondering is: why is this not an issue that everyone is facing?
It seems that if HTTP/2 is used, then sending GOAWAY is simply part of the HTTP/2 behavior, meaning it is always a potential issue.
Is there any conceptual problem with re-reading the stream again? It seems that in a multi-threaded scenario the current implementation would fail with the same error, since it only allows for one read.
Can I also ask you what kind of features will break if I replace the content?
It's essentially down to an expected race condition in the protocol - you would only see this if
- The connection was shutting down. Depending on the service this could be very rare with very long-lived connections.
- The request got assigned to that connection right before it shut down (e.g. the request was in-flight). The lower the load, the lower the latency, the lower the chance you'd hit this.
- You're not using Expect: 100-continue (this one is very common, but just calling it out)
If your scenario happens to be cycling through more connections, it's possible you'd see this more often. It's also often a totally fine response to seeing this exception to return an error to the client and have it retry if needed.
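For reference, a hedged sketch of how one might opt proxied requests into Expect: 100-continue from a request transform, assuming the same transform pipeline as in the description above (the null check is just a guard for bodiless requests):

// Illustrative sketch: ask HttpClient to send Expect: 100-continue so the request body
// is only transmitted after the server signals it will process the request.
context.AddRequestTransform(transformContext =>
{
    if (transformContext.ProxyRequest.Content is not null)
    {
        transformContext.ProxyRequest.Headers.ExpectContinue = true;
    }
    return ValueTask.CompletedTask;
});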
Is there any conceptual problem re-reading the stream again?
No, I don't think so. Buffering in the first place can be problematic from a performance standpoint, doesn't work for all requests, etc., but once you've committed to doing so you should be free to rewind.
Can I also ask you what kind of features will break if I replace the content?
If you look at https://github.com/microsoft/reverse-proxy/blob/main/src/ReverseProxy/Forwarder/HttpForwarder.cs you can search for requestContent.
A few examples:
- Error handling will be wrong (reported ForwarderError, proxy's response status code)
- ContentTransferring telemetry will not be emitted
- We use state on the custom HttpContent to reliably synchronize the request with how the body is read. As a result, you could see thread-safety issues as your custom content logic may still be running (and using things like the HttpContext) when YARP already exits.
- We won't signal to the client that we're aborting request reads
- Possible further issues in the future as we're making assumptions that the content wasn't changed
Thanks for an extensive explanation!
We can add Expect: 100-continue if that will fix the issue; is there any potential drawback to that approach? I assume there will be a bit more traffic between YARP and the destination in that case. I also see this in the logs after turning it on:
poolId: 27678033, workerId: 60852325, requestId: 3, memberName: WaitFor100ContinueAsync, message: 100-Continue timer expired.
I suppose that means the server does not "support" answering that header out of the box.
I would also like to check whether you have plans to fix this "properly": say, either a config option to allow buffering, or, if buffering is already enabled (e.g. the stream's CanSeek is true), having YARP automatically enable the "retry" and not throw the exception?
I also do not quite understand how multi-threading is an issue here, since we can only send one request at a time; we cannot send e.g. a POST two times in parallel, can we?
Expect continue involves an extra round-trip to the server, so it does impact latency. If the server doesn't respond to those (and you rely on the 100-Continue timer), that will add tons of latency and should be avoided.
The multi-threading concerns come in with how HttpClient and custom HttpContent interact. A call to SerializeToStreamAsync may happen at any point, potentially just after SendAsync already returns. It may also still be running when SendAsync returns, etc. It's a solvable problem, just not super trivial.
YARP/you could also choose to retry the request at a higher level when seeing this InvalidOperationException.
Making things "just work" for seekable streams makes sense, but as I've said it's not the typical use case to have all requests buffered. I probably won't get around to that for a while.
It would be nice to have a reliable way to detect this error for retry; so far I have had to rely on the exact message:
public async Task InvokeAsync(HttpContext context)
{
    // Buffer the request body so it can be rewound if the forwarder fails.
    context.Request.EnableBuffering();

    await _next(context);

    // If YARP reported the "Stream was already consumed" failure, rewind the body,
    // clear the error and the response, and run the pipeline once more.
    var errorFeature = context.GetForwarderErrorFeature();
    if (errorFeature?.Exception?.GetBaseException() is InvalidOperationException { Message: "Stream was already consumed." })
    {
        context.Features.Set<IForwarderErrorFeature>(null);
        context.Request.Body.Position = 0;
        context.Response.Clear();
        await _next(context);
    }
}
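For anyone copying this, a hedged sketch of how such a middleware might be wired up, assuming it lives in a conventional middleware class called RetryMiddleware (the class name is illustrative) and runs in the main pipeline before the proxy:

// Illustrative registration: the retry middleware must run before MapReverseProxy so
// that EnableBuffering() takes effect and the forwarder error feature can be inspected
// once the proxy pipeline has finished.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();
app.UseMiddleware<RetryMiddleware>();
app.MapReverseProxy();
app.Run();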
Can you advise on how to properly "reset" the proxy/ForwarderMiddleware to allow for a retry? My attempt at it is described above.
That seems fine. Note that we might change the exact message though.
Yes, that much is certain, but I do not see any other way to avoid retrying something that we should not. It would be ideal if you changed the exception to a custom type like StreamWasAlreadyConsumedException; that would at least make it easier.
Do you have any estimate as to when we might expect a solution with seekable stream check?
I don't think we'd want to expose a public type for this, at which point you're looking at similar fragility.
Checking InvalidOperationException && !response.HasStarted && !context.RequestAborted.IsCancellationRequested is likely good enough. InvalidOpEx isn't a commonly thrown exception.
The request already failed, so worst case is your retry also fails.
Re: time estimate, no, sorry, but unless we see this meaningfully affecting more users it's unlikely to be any time soon. For most this should be in the noise.
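Putting that suggestion together with the earlier middleware, a hedged sketch of the more defensive check (illustrative only; it reuses the _next field and features from the snippet above):

// Illustrative: retry only when the forwarder failed with an InvalidOperationException,
// nothing has been written to the response yet, and the client has not aborted.
var errorFeature = context.GetForwarderErrorFeature();
if (errorFeature?.Exception?.GetBaseException() is InvalidOperationException
    && !context.Response.HasStarted
    && !context.RequestAborted.IsCancellationRequested)
{
    context.Features.Set<IForwarderErrorFeature>(null);
    context.Request.Body.Position = 0;
    context.Response.Clear();
    await _next(context);
}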
Thanks a lot for your help with the workaround solution for this issue! I assume that if you will not fix this anytime soon, then you probably won't change the exception type and text either :)
I suppose it is OK to leave this issue open, so when you have time to fix it, you can close it.
[Glad to find this post] We are a Microsoft internal team doing a load test with HTTP/2 requests, and we have seen this same error too. Our gateway is using YARP 2.1. It seems that around 3% of requests consistently fail with System.InvalidOperationException: Stream was already consumed.
With HTTP/2 I believe you should only be seeing this if one of these is commonly happening:
- The backend server is closing connections on YARP (via HTTP/2 GOAWAY)
- The backend server is lowering the MaxStreamsPerConnection limit under the default 100
- The backend server is refusing requests via HTTP/2 REFUSED_STREAM
Are your load tests set up in a way where that would commonly happen? E.g. constantly restarting backend servers?
Suggested reading for anyone hitting such issues: https://github.com/dotnet/runtime/issues/53914#issuecomment-2827443942
We are using YARP to authenticate users against Azure AD B2C and then route requests to a server, say X. We have a new requirement to route some of these users (a subset with certain permissions) to a new server Y. So we have a new routing component sitting on server X that examines the HTTP headers and then issues a 307 to server Y. All the GET requests get forwarded correctly, but any POST requests with a body fail with the exact same error message (in the YARP logs). I am following all the suggestions from this post to EnableBuffering, reset the body position to 0, and then retry (tried with multiple retries as well), but this workaround does not seem to work. We are using the latest version of YARP. Any suggestions on how to handle this?
Here is our middleware proxy code, which was suggested by @ilya-scale.
public async Task InvokeAsync(HttpContext context)
{
    context.Request.EnableBuffering();

    await next(context);

    var errorFeature = context.GetForwarderErrorFeature();
    if (errorFeature?.Exception?.GetBaseException() is InvalidOperationException { Message: "Stream was already consumed." })
    {
        context.Features.Set<IForwarderErrorFeature>(null);
        context.Request.Body.Position = 0;
        context.Response.Clear();
        await next(context);
    }
}
Somehow, it seems like it is already too late and the request body has already started to be consumed. Error details: An error occurred while sending the request. Stream was already consumed.
System.InvalidOperationException
System.Net.Http.HttpRequestException:
   at System.Net.Http.HttpConnection+<SendAsync>d__57.MoveNext (System.Net.Http, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Net.Http.HttpConnectionPool+<SendWithVersionDetectionAndRetryAsync>d__89.MoveNext (System.Net.Http, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Net.Http.DiagnosticsHandler+<SendAsyncCore>d__10.MoveNext (System.Net.Http, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Net.Http.RedirectHandler+<SendAsync>d__4.MoveNext (System.Net.Http, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Yarp.ReverseProxy.Forwarder.HttpForwarder+<SendAsync>d__8.MoveNext (Yarp.ReverseProxy, Version=2.3.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
Inner exception System.InvalidOperationException handled at System.Net.Http.HttpConnection+<SendAsync>d__57.MoveNext:
   at Yarp.ReverseProxy.Forwarder.StreamCopyHttpContent+<SerializeToStreamAsync>d__15.MoveNext (Yarp.ReverseProxy, Version=2.3.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Net.Http.HttpContent+<<CopyToAsync>g__WaitAsync|56_0>d.MoveNext (System.Net.Http, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Net.Http.HttpConnection+<SendRequestContentAsync>d__61.MoveNext (System.Net.Http, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Net.Http.HttpConnection+<SendAsync>d__57.MoveNext (System.Net.Http, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
@sachinwal the workaround works. We use Yarp 2.3.0.
We tried to use the workaround, and it works in some of the cases. In other cases, we get a different error (caused by the workaround): System.InvalidOperationException: The response cannot be cleared, it has already started sending.
Any idea on how to fix this?
You should also check that !context.Response.HasStarted and !context.RequestAborted.IsCancellationRequested before attempting to retry
You should also check that !context.Response.HasStarted and !context.RequestAborted.IsCancellationRequested before attempting to retry
I see, this prevents the "cannot be cleared" error, but it leads me back to the "already consumed" error. I'm using this in conjunction with Power BI, so it's probably a specific use case where this error occurs. I'll look further into it.