
Server streaming method returns rpc error: code = DeadlineExceeded desc = context deadline exceeded

Open currysunxu opened this issue 2 years ago • 2 comments

My issue behavior is similar to #226.

Proto file(s)

    service TrService {
        rpc GetTrafficData(TrRequest) returns (stream TrResponse) {}
    }

Command line arguments / config

    ghz --insecure --proto ./trafficproxy.proto --call trafficproxy.TrafficService.GetTrafficData \
        --total=1 --stream-call-duration 10000ms --stream-call-count=10 \
        -d '{xxxxx}' my.company.dns.com:10086

Describe the bug

I have a similar issue when calling a gRPC server-streaming method with the new options in the 0.80.0 release, such as --stream-call-duration 10000ms and --stream-call-count=10.

Summary:
  Count:        1
  Total:        20.00 s
  Slowest:      0 ns
  Fastest:      0 ns
  Average:      20.00 s
  Requests/sec: 0.05

Response time histogram:

Latency distribution:

Status code distribution:
  [DeadlineExceeded]   1 responses

Error distribution:
  [1]   rpc error: code = DeadlineExceeded desc = context deadline exceeded

However, when I use the BloomRPC tool with the same proto file to call the gRPC method, it succeeds. I mean the server-streaming response keeps returning data until I stop the connection (channel). Could this be a problem with my parameter usage? Please kindly give any suggestions, thanks in advance.
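
For context, a minimal Go sketch of the kind of handler being described here: a server-streaming RPC that keeps sending responses until the client closes the stream. This is not the reporter's code; the generated package, service, and message names are assumed from the call path in the command above, and the response payload is left empty.

    package server

    import (
        "time"

        pb "example.com/trafficproxy" // hypothetical generated package
    )

    type trafficServer struct {
        pb.UnimplementedTrafficServiceServer
    }

    // GetTrafficData streams responses indefinitely; it only returns once the
    // client cancels or disconnects, which shows up as a context error.
    func (s *trafficServer) GetTrafficData(req *pb.TrRequest, stream pb.TrafficService_GetTrafficDataServer) error {
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-stream.Context().Done():
                // client hung up (or the benchmark hit its deadline)
                return stream.Context().Err()
            case <-ticker.C:
                if err := stream.Send(&pb.TrResponse{ /* payload fields omitted */ }); err != nil {
                    return err
                }
            }
        }
    }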

currysunxu · Mar 11 '22 07:03

Hi there. I had the same issue, and I believe what's missing here is --timeout 0 to set an infinite timeout. But that causes another issue, a Canceled error when each stream is torn down; to "solve" that we can add --count-errors, and then it should more or less work in the end.

Anyway, here is an example:

ghz --insecure --async \
    --proto ../grpc-service/Protos/greet.proto \
    --call greet.Greeter.KeepTalking \
    --data '{ "name" : "Someone {{.RequestNumber}}" }' \
    --timeout 0 --total 100 --stream-call-count 2 --count-errors \
    --output streaming_keepTalking.html --format html  \
    localhost:5022
  • --timeout 0 sets ghz to an infinite per-call timeout
  • --total 100 performs 100 streaming requests
  • --stream-call-count 2 expects 2 messages back from each stream
  • --count-errors counts the canceled responses in the stats as if they were successful

Result (my example runs against a server that streams messages indefinitely, one every 10 s):

Summary:
  Count:        100
  Total:        10.04 s
  Slowest:      10.04 s
  Fastest:      10.02 s
  Average:      10.03 s
  Requests/sec: 9.96

Response time histogram:
  10020.563 [1]  |∎∎
  10022.154 [16] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  10023.744 [16] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  10025.335 [14] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  10026.926 [17] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  10028.516 [2]  |∎∎∎∎∎
  10030.107 [4]  |∎∎∎∎∎∎∎∎∎
  10031.698 [8]  |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  10033.289 [9]  |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  10034.879 [9]  |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  10036.470 [4]  |∎∎∎∎∎∎∎∎∎

Latency distribution:
  10 % in 10.02 s 
  25 % in 10.02 s 
  50 % in 10.03 s 
  75 % in 10.03 s 
  90 % in 10.03 s 
  95 % in 10.03 s 
  99 % in 10.04 s 

Status code distribution:
  [Canceled]   100 responses   

Error distribution:
  [100]   rpc error: code = Canceled desc = context canceled   
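
Applied to the original trafficproxy call from this issue, the same workaround would look roughly like this (host and request body kept as the placeholders from the report; a sketch, not a verified command):

    ghz --insecure \
        --proto ./trafficproxy.proto \
        --call trafficproxy.TrafficService.GetTrafficData \
        -d '{xxxxx}' \
        --timeout 0 --total=1 --stream-call-count=10 --count-errors \
        my.company.dns.com:10086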

Hope that helps

kelvinrfr · May 03 '22 22:05

But I agree that infinite server-streaming performance tests are not supported by the tool, so instead we use this kind of workaround.

kelvinrfr · May 03 '22 22:05