Server request processing time is slow and depends on connection quality
Hello, I've noticed that gRPC requests made with YetAnotherHttpHandler are processed more slowly by the backend server. I compared processing times for requests made with:
- .NET HttpHandler
- BestHTTP-based HttpHandler (Unity package)
- YetAnotherHttpHandler v1.9
The .NET HttpHandler and the BestHTTP-based handler show similar results, while YetAnotherHttpHandler's request processing time is 5-10+ times slower on average in my tests. It also looks like, for YetAnotherHttpHandler, request processing time depends on the client's network connection quality; I don't see this dependence with the other HTTP handlers.
Reproduced environments:
- Backend: .NET 8 in Docker (Linux):
  - localhost
  - local network
  - remote server with k8s, ingress nginx
- Client: YetAnotherHttpHandler v1.9:
  - Windows client
  - Linux client (Docker)
  - Android client
Tested with requests 0.5-2 KB in size.
I created a fork with repro code: https://github.com/pspgit/YetAnotherHttpHandler-PerformanceIssueRepro/tree/ServerRequestPerformance-Repro
Branch: ServerRequestPerformance-Repro
Solution: repro/GrpcService1.sln
- ConsoleApp1 - client that runs requests with both YetAnotherHttpHandler and the .NET HttpHandler
- GrpcService1 - minimal server-side gRPC service. Uses middleware to measure the average request processing time per 100 requests, separately for each HTTP handler's endpoint (see the sketch below).
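For reference, the timing middleware amounts to roughly the following sketch (illustrative names such as RequestTimingMiddleware, not the exact code from the fork):

```csharp
using System.Collections.Concurrent;
using System.Diagnostics;

// Rough sketch of the per-endpoint timing middleware in GrpcService1.
public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ConcurrentDictionary<string, (long Count, double TotalMs)> _stats = new();

    public RequestTimingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        var sw = Stopwatch.StartNew();
        await _next(context);              // run the rest of the pipeline (the gRPC handler)
        sw.Stop();

        var path = context.Request.Path.Value ?? "";
        var (count, totalMs) = _stats.AddOrUpdate(
            path,
            _ => (1, sw.Elapsed.TotalMilliseconds),
            (_, s) => (s.Count + 1, s.TotalMs + sw.Elapsed.TotalMilliseconds));

        if (count % 100 == 0)
        {
            Console.WriteLine($">> Avg: {path}: {totalMs / count:F3}ms");
            _stats[path] = (0, 0);         // start a new 100-request window
        }
    }
}

// Registered in Program.cs before MapGrpcService<GreeterService>():
// app.UseMiddleware<RequestTimingMiddleware>();
```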
Sample output (local network, wi-fi, low signal):
>> Avg: /greet.Greeter/SayHelloYet: 62.906ms
>> Avg: /greet.Greeter/SayHelloRegular: 0.278ms
>> Avg: /greet.Greeter/SayHelloYet: 13.938ms
>> Avg: /greet.Greeter/SayHelloRegular: 0.050ms
>> Avg: /greet.Greeter/SayHelloYet: 12.532ms
>> Avg: /greet.Greeter/SayHelloRegular: 0.043ms
>> Avg: /greet.Greeter/SayHelloYet: 5.883ms
>> Avg: /greet.Greeter/SayHelloRegular: 0.044ms
>> Avg: /greet.Greeter/SayHelloYet: 18.054ms
>> Avg: /greet.Greeter/SayHelloRegular: 0.063ms
>> Avg: /greet.Greeter/SayHelloYet: 7.107ms
>> Avg: /greet.Greeter/SayHelloRegular: 0.046ms
>> Avg: /greet.Greeter/SayHelloYet: 7.558ms
>> Avg: /greet.Greeter/SayHelloRegular: 0.049ms
>> Avg: /greet.Greeter/SayHelloYet: 9.124ms
>> Avg: /greet.Greeter/SayHelloRegular: 0.048ms
For the .NET HttpHandler, request processing time doesn't depend on connection quality; it stays in the range 0.04-0.3 ms. For YetAnotherHttpHandler, server-side request processing takes 1-2 ms on localhost and from 5 ms up to a few seconds on the remote server, depending on the client's connection quality. I simulated a bad connection by moving my laptop away from the Wi-Fi source.
I understand that a bad connection means a slower send/receive rate for requests, but the issue is that it somehow affects request processing time on the backend side. I would expect that time to be affected by things like database connections and backend calculations, not by the client's send/receive speed.
This also looks related: https://github.com/Cysharp/YetAnotherHttpHandler/issues/1#issuecomment-2220565476 https://github.com/Cysharp/YetAnotherHttpHandler/issues/1#issuecomment-2022253507
I've tried to get more info by enabling tracing and found a different order of server log events in the cases where YetAnotherHttpHandler requests get processed more slowly.
.NET HttpHandler
[2025-01-13 16:29:50.427] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[37]
Connection id "0HN9JTTTEOKF5" received HEADERS frame for stream ID 877 with length 201 and flags END_HEADERS.
[2025-01-13 16:29:50.427] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[37]
Connection id "0HN9JTTTEOKF5" received DATA frame for stream ID 877 with length 1014 and flags NONE.
[2025-01-13 16:29:50.427] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[37]
Connection id "0HN9JTTTEOKF5" received DATA frame for stream ID 877 with length 0 and flags END_STREAM.
[2025-01-13 16:29:50.427] info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/2 POST http://192.168.100.14:32774/greet.Greeter/SayHelloRegular - application/grpc -
[2025-01-13 16:29:50.427] dbug: Microsoft.AspNetCore.Server.Kestrel[25]
Connection id "0HN9JTTTEOKF5", Request id "0HN9JTTTEOKF5:0000036D": started reading request body.
[2025-01-13 16:29:50.427] dbug: Microsoft.AspNetCore.Server.Kestrel[26]
Connection id "0HN9JTTTEOKF5", Request id "0HN9JTTTEOKF5:0000036D": done reading request body.
[2025-01-13 16:29:50.427] info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/2 POST http://192.168.100.14:32774/greet.Greeter/SayHelloRegular - 200 - application/grpc 0.2632ms
[2025-01-13 16:29:50.427] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[49]
Connection id "0HN9JTTTEOKF5" sending HEADERS frame for stream ID 877 with length 4 and flags END_HEADERS.
[2025-01-13 16:29:50.427] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[49]
Connection id "0HN9JTTTEOKF5" sending DATA frame for stream ID 877 with length 25 and flags NONE.
YetAnotherHttpHandler
[2025-01-13 16:40:05.701] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[37]
Connection id "0HN9JTTTEOKF7" received HEADERS frame for stream ID 7 with length 28 and flags END_HEADERS.
[2025-01-13 16:40:05.701] info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/2 POST http://192.168.100.14:32774/greet.Greeter/SayHelloYet - application/grpc -
[2025-01-13 16:40:05.701] dbug: Microsoft.AspNetCore.Server.Kestrel[25]
Connection id "0HN9JTTTEOKF7", Request id "0HN9JTTTEOKF7:00000007": started reading request body.
[2025-01-13 16:40:05.702] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[37]
Connection id "0HN9JTTTEOKF7" received DATA frame for stream ID 7 with length 1014 and flags NONE.
[2025-01-13 16:40:05.702] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[37]
Connection id "0HN9JTTTEOKF7" received DATA frame for stream ID 7 with length 0 and flags END_STREAM.
[2025-01-13 16:40:05.703] dbug: Microsoft.AspNetCore.Server.Kestrel[26]
Connection id "0HN9JTTTEOKF7", Request id "0HN9JTTTEOKF7:00000007": done reading request body.
[2025-01-13 16:40:05.703] info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/2 POST http://192.168.100.14:32774/greet.Greeter/SayHelloYet - 200 - application/grpc 1.4173ms
[2025-01-13 16:40:05.703] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[49]
Connection id "0HN9JTTTEOKF7" sending HEADERS frame for stream ID 7 with length 4 and flags END_HEADERS.
[2025-01-13 16:40:05.703] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[49]
Connection id "0HN9JTTTEOKF7" sending DATA frame for stream ID 7 with length 21 and flags NONE.
[2025-01-13 16:40:05.703] trce: Microsoft.AspNetCore.Server.Kestrel.Http2[49]
Connection id "0HN9JTTTEOKF7" sending HEADERS frame for stream ID 7 with length 1 and flags END_STREAM, END_HEADERS.
For the .NET HttpHandler, "received DATA frame" happens before "Request starting" and "started reading request body". For YetAnotherHttpHandler it happens after "started reading request body", and that takes additional time.
You can uncomment the following lines in GrpcService1/appsettings.json to enable these logs:
- "Microsoft.AspNetCore.Hosting.Diagnostics": "Trace"
- "Microsoft.AspNetCore.Server.Kestrel": "Trace"
- "Microsoft.AspNetCore.Server.Kestrel.Http2": "Trace"
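Equivalently, the same categories can be raised to Trace in code instead of appsettings.json; a minimal sketch assuming the default gRPC template's Program.cs (GreeterService and the gRPC registration come from the template):

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGrpc();

// Raise the same log categories to Trace without editing appsettings.json.
builder.Logging.AddFilter("Microsoft.AspNetCore.Hosting.Diagnostics", LogLevel.Trace);
builder.Logging.AddFilter("Microsoft.AspNetCore.Server.Kestrel", LogLevel.Trace);
builder.Logging.AddFilter("Microsoft.AspNetCore.Server.Kestrel.Http2", LogLevel.Trace);

var app = builder.Build();
app.MapGrpcService<GreeterService>();
app.Run();
```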
We set up the environment and were able to reproduce the issue.
It appears that server processing is taking a long time, but the time is actually being consumed by reading the request body. When the network conditions are poor, the body transfer takes longer, and that time is counted as request processing time.
With SocketsHttpHandler, the body is sent along with the headers, so the subsequent reading time is zero. However, the same amount of time is spent before the request begins (this is not logged in Kestrel).
When measuring the round-trip time from the client, we confirmed that there is no difference between the two implementations.
gRPC message deserialization is processed within the ASP.NET Core middleware pipeline, so the time spent reading the body shows up there. You should be able to get accurate values by measuring only the method itself, for example with a gRPC interceptor (see the sketch below).
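A minimal sketch of such an interceptor (the name MethodTimingInterceptor is illustrative; this is not code from the repro):

```csharp
using System.Diagnostics;
using Grpc.Core;
using Grpc.Core.Interceptors;

// Times only the service method, i.e. after the request message has already
// been read from the stream and deserialized.
public class MethodTimingInterceptor : Interceptor
{
    public override async Task<TResponse> UnaryServerHandler<TRequest, TResponse>(
        TRequest request,
        ServerCallContext context,
        UnaryServerMethod<TRequest, TResponse> continuation)
    {
        var sw = Stopwatch.StartNew();
        var response = await continuation(request, context);   // the actual SayHello* method
        sw.Stop();
        Console.WriteLine($">> Method only: {context.Method}: {sw.Elapsed.TotalMilliseconds:F3}ms");
        return response;
    }
}

// Registration in Program.cs:
// builder.Services.AddGrpc(options => options.Interceptors.Add<MethodTimingInterceptor>());
```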
We believe that this issue can be resolved by sending frames at the same time as request headers, similar to SocketsHttpHandler, but we are not yet sure how well this modification will work.
Further investigation revealed that inserting a 100ms sleep before sending the request causes both HEADERS and DATA to be sent at the same time.
This behavior is not due to the delay in sending data from C# to Rust; it occurs regardless of whether all data is already present on the Rust side, depending solely on whether a sleep is inserted or not.
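For reference, the client-side experiment is roughly the following hypothetical sketch (the address, payload size, and use of Http2Only are illustrative; the generated Greeter/HelloRequest types are assumed to come from the repro's greet.proto, and this is not the exact fork code):

```csharp
using Cysharp.Net.Http;
using Grpc.Net.Client;

using var handler = new YetAnotherHttpHandler { Http2Only = true };  // plaintext HTTP/2
using var channel = GrpcChannel.ForAddress("http://192.168.100.14:32774",
    new GrpcChannelOptions { HttpHandler = handler });
var client = new Greeter.GreeterClient(channel);

for (var i = 0; i < 100; i++)
{
    await Task.Delay(100);  // the 100 ms sleep: with it, HEADERS and DATA are sent together
    var reply = await client.SayHelloYetAsync(new HelloRequest { Name = new string('x', 1024) });
}
```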
At this point, I suspect this is due to timing differences in the internal processing of hyper and the async runtime.