Track response rate for streaming calls
Is your feature request related to a problem? Please describe. My API uses server streaming to send messages to clients, and I can't view the response rate without digging through server logs. This would be useful when load testing my API.
Describe the solution you'd like Track and output the response rate, similar to how requests/second is displayed:
Summary:
Count: 100
Total: 120.01 s
Slowest: 110.45 s
Fastest: 14.74 s
Average: 99.97 s
Requests/sec: 0.83
Responses/sec: 587 <- new
Detailed output formats should track the response rate per request/subscriber. A response rate histogram/distribution would also be very beneficial.
I would be interested in the same feature. We have a server-streaming API that streams millions of response messages for a single RPC call, and we are looking for ways to get insight into the number of response messages per second.
The current output for an example call streaming 10M+ response messages is:
Summary:
Count: 1
Total: 264.17 s
Slowest: 264.17 s
Fastest: 264.17 s
Average: 264.17 s
Requests/sec: 0.00
Response time histogram:
264169.921 [1] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
264169.921 [0] |
264169.921 [0] |
264169.921 [0] |
264169.921 [0] |
264169.921 [0] |
264169.921 [0] |
264169.921 [0] |
264169.921 [0] |
264169.921 [0] |
264169.921 [0] |
Latency distribution:
10 % in 264.17 s
25 % in 264.17 s
50 % in 264.17 s
75 % in 264.17 s
90 % in 264.17 s
95 % in 264.17 s
99 % in 264.17 s
Status code distribution:
[OK] 1 responses
The example command was:
bin/ghz --insecure \
--call=Foo/Bar \
--data-file=input.json \
--concurrency=1 \
--total=1 \
--timeout=20m \
grpcserver:9090
We are also interested in the same feature. Could you share the config arguments or config file you used to get the above output? Many thanks
@mml21 good to know there is more interest. I've added the example command to my previous post.
Hello, I understand that this is a missing feature and that it would be useful. My time is pretty limited right now, but hopefully I'll get to this soon. As a far-from-ideal workaround, if really necessary: you could use the debug logging option, which writes each response message to a JSON lines log file with a timestamp. You can then analyze that file and aggregate the entries by timestamp to count server messages received each second.
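A minimal sketch of that aggregation in Python, assuming each debug log line is a JSON object with an ISO-formatted timestamp (the `timestamp` field name here is an assumption; check the actual field name in your ghz debug log):

```python
import json
from collections import Counter
from datetime import datetime

def messages_per_second(jsonl_lines):
    """Count log entries per whole second, keyed by their timestamp."""
    buckets = Counter()
    for line in jsonl_lines:
        entry = json.loads(line)
        # "timestamp" is a hypothetical field name; adjust to your log format.
        ts = datetime.fromisoformat(entry["timestamp"])
        buckets[ts.replace(microsecond=0)] += 1  # truncate to the second
    return buckets

# Tiny synthetic log: three messages in the first second, one in the next.
sample = [
    '{"timestamp": "2023-01-01T10:00:00.100"}',
    '{"timestamp": "2023-01-01T10:00:00.400"}',
    '{"timestamp": "2023-01-01T10:00:00.900"}',
    '{"timestamp": "2023-01-01T10:00:01.200"}',
]
rates = messages_per_second(sample)
for second, count in sorted(rates.items()):
    print(f"{second.isoformat()}  {count} msg/s")
```

For a real run you would read the lines from the debug log file instead of the inline sample.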
@bojand thanks for your reply! I really appreciate all the hard work and would be happy to contribute, but unfortunately my experience with Go is pretty limited...
One thing that would be extra valuable in the context of large streaming responses is the addition of basic bandwidth stats, so we have insight into the efficiency of Protobuf as a wire protocol.
I have something working locally that splits the summary into request and response summaries, each with count, rate, size, and speed. I haven't had time to complete it.
Request Summary:
Count: 1
Total: 257.15 ms
Slowest: 255.30 ms
Fastest: 255.30 ms
Average: 255.30 ms
Rate: 3.89/s
Size: 14 B
Speed: 1 bps
Response Summary:
Count: 100
Rate: 388.88/s
Size: 33.8 KiB
Speed: 16.8 Kbps
Response time histogram:
255.304 [1] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
255.304 [0] |
255.304 [0] |
255.304 [0] |
255.304 [0] |
255.304 [0] |
255.304 [0] |
255.304 [0] |
255.304 [0] |
255.304 [0] |
255.304 [0] |
Latency distribution:
0 % in 0 ns
0 % in 0 ns
0 % in 0 ns
0 % in 0 ns
0 % in 0 ns
0 % in 0 ns
0 % in 0 ns
Status code distribution:
[OK] 1 responses
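For reference, the Rate fields in the summary above appear to follow directly from the counts and the elapsed time (this is my reading of the output; the Size/Speed derivation isn't shown here):

```python
# Reproduce the Rate fields from the summary above:
#   request rate  = requests / total elapsed time
#   response rate = response messages / total elapsed time
total_s = 0.25715   # Total: 257.15 ms
requests = 1        # Request Summary Count
responses = 100     # Response Summary Count

request_rate = requests / total_s
response_rate = responses / total_s

print(f"Request rate:  {request_rate:.2f}/s")   # matches Rate: 3.89/s
print(f"Response rate: {response_rate:.2f}/s")  # matches Rate: 388.88/s
```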
I am also interested in this feature. @steven-sheehy it would be great if you could share an example of your changes!