Support gRPC

Open oschaaf opened this issue 5 years ago • 9 comments

Add the capability to load-test the gRPC protocol.
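For illustration, here is the kind of call a gRPC load generator would need to issue repeatedly, sketched with grpcurl against a hypothetical helloworld.Greeter service (the service, method, and port are examples for this sketch, not anything Nighthawk ships):

# Hypothetical target: a unary SayHello RPC; a load generator would issue
# many of these per second. grpcurl needs server reflection enabled on the
# target (or a --proto flag) to resolve the method.
grpcurl -plaintext -d '{"name": "nighthawk"}' \
  localhost:50051 helloworld.Greeter/SayHello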

oschaaf avatar Apr 23 '19 20:04 oschaaf

Hey there @oschaaf! :wave: We're looking forward to adding support for gRPC load generation using Nighthawk in Meshery. I'd like to know the current status of this. Is there a way to generate load on a gRPC service using Nighthawk?

DelusionalOptimist avatar Aug 24 '21 21:08 DelusionalOptimist

Hi @DelusionalOptimist and thank you for reaching out. The gRPC support has been de-prioritized in favor of other Nighthawk features for now. While this remains on our roadmap, we don't plan to start working on it in the near future.

With that said, assuming you are interested in contributing this, I would be happy to support your efforts via discussions, code reviews, or other means you might find useful. Is that something you had in mind or would be interested in?

mum4k avatar Aug 25 '21 04:08 mum4k

Thanks @mum4k for the update. It's OK that this isn't a priority right now; we'll work on integrating the other cool features that Nighthawk offers in the meantime. :smile:

To your other question, thanks for asking; I would've loved to work on this, but I'm not well-positioned to do so right now :sweat_smile:. Though I'll make sure to bring this up with the community to find other potential contributors.

DelusionalOptimist avatar Aug 27 '21 14:08 DelusionalOptimist

Thank you and that sounds great. From our end, we will keep you posted when we get closer to implementing this.

mum4k avatar Aug 27 '21 18:08 mum4k

@mum4k Any updates here? I'm trying to use Nighthawk as both the load generator and the test server. After setting up the simple Nighthawk gRPC service (see https://github.com/envoyproxy/nighthawk#nighthawk-grpc-service), I failed to use the Nighthawk client to send requests to this service. Here is the error log:

./nighthawk_client -v debug 127.0.0.1:8443
[00:07:24.240580][2049658][D] Unable to use runtime singleton for feature envoy.restart_features.use_apple_api_for_dns_lookups
[00:07:24.240726][2049658][D] create DNS resolver type: envoy.network.dns_resolver.cares
[00:07:24.240792][2049658][I] Starting 1 threads / event loops. Time limit: 5 seconds.
[00:07:24.240801][2049658][I] Global targets: 100 connections and 5 calls per second.
[00:07:24.244664][2049658][D] dns resolution for 127.0.0.1 started
[00:07:24.328722][2049658][D] dns resolution for 127.0.0.1 completed with status 0
[00:07:24.328751][2049658][D] DNS resolution complete for 127.0.0.1 (1 entries, using 127.0.0.1:8443).
[00:07:24.329832][2049658][D] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[00:07:24.344398][2049658][D] Computed configuration: static_resources {
  clusters {
    name: "0"
    type: STATIC
    connect_timeout {
      seconds: 30
    }
    circuit_breakers {
      thresholds {
        max_connections {
          value: 100
        }
        max_pending_requests {
          value: 1
        }
        max_requests {
          value: 100
        }
        max_retries {
        }
      }
    }
    load_assignment {
      cluster_name: "0"
      endpoints {
        lb_endpoints {
          endpoint {
            address {
              socket_address {
                address: "127.0.0.1"
                port_value: 8443
              }
            }
          }
        }
      }
    }
    typed_extension_protocol_options {
      key: "envoy.extensions.upstreams.http.v3.HttpProtocolOptions"
      value {
        [type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions] {
          common_http_protocol_options {
            max_requests_per_connection {
              value: 4294937295
            }
          }
          explicit_http_config {
            http_protocol_options {
            }
          }
        }
      }
    }
  }
}
stats_flush_interval {
  seconds: 5
}

[00:07:24.344884][2049663][D] completionThread running
[00:07:24.345270][2049658][D] transport socket match, socket default selected for host with address 127.0.0.1:8443
[00:07:24.345304][2049658][D] initializing Primary cluster 0 completed
[00:07:24.345308][2049658][D] init manager Cluster 0 contains no targets
[00:07:24.345309][2049658][D] init manager Cluster 0 initialized, notifying ClusterImplBase
[00:07:24.345317][2049658][D] adding TLS cluster 0
[00:07:24.345393][2049658][D] membership update for TLS cluster 0 added 1 removed 0
[00:07:24.345402][2049658][D] cm init: init complete: cluster=0 primary=0 secondary=0
[00:07:24.345404][2049658][D] maybe finish initialize state: 0
[00:07:24.345407][2049658][D] cm init: adding: cluster=0 primary=0 secondary=0
[00:07:24.345408][2049658][D] maybe finish initialize state: 1
[00:07:24.345410][2049658][D] maybe finish initialize primary init clusters empty: true
[00:07:24.345546][2049664][D] adding TLS cluster 0
[00:07:24.345552][2049665][D] completionThread running
[00:07:24.345575][2049664][D] membership update for TLS cluster 0 added 1 removed 0
[00:07:24.979296][2049664][D] queueing stream due to no available connections
[00:07:24.979313][2049664][D] trying to create new connection
[00:07:24.979316][2049664][D] creating a new connection
[00:07:24.979447][2049664][D] [C0] connecting
[00:07:24.979452][2049664][D] [C0] connecting to 127.0.0.1:8443
[00:07:24.979553][2049664][D] [C0] connection in progress
[00:07:24.979565][2049664][D] [C0] connected
[00:07:24.980061][2049664][D] [C0] connected on local interface 'lo'
[00:07:24.980067][2049664][D] [C0] connected
[00:07:24.980100][2049664][D] [C0] attaching to next stream
[00:07:24.980103][2049664][D] [C0] creating stream
[00:07:24.980239][2049664][D] [C0] Error dispatching received data: http/1.1 protocol error: HPE_INVALID_CONSTANT
[00:07:24.980245][2049664][D] [C0] closing data_to_write=0 type=1
[00:07:24.980247][2049664][D] [C0] closing socket: 1
[00:07:24.980264][2049664][D] [C0] disconnect. resetting 1 pending requests
[00:07:24.980267][2049664][D] [C0] request reset
[00:07:24.980274][2049664][D] [C0] client disconnected, failure reason:
[00:07:24.980281][2049664][D] invoking idle callbacks - is_draining_for_deletion_=false
[00:07:24.980288][2049664][E] Exiting due to failing termination predicate
[00:07:24.980292][2049664][I] Stopping after 101 ms. Initiated: 1 / Completed: 1. (Completion rate was 9.895895182678224 per second.)
[00:07:24.980297][2049664][D] [C0] destroying stream: 0 remaining
[00:07:25.120941][2049658][E] Terminated early because of a failure predicate.
[00:07:25.120964][2049658][I] Check the output for problematic counter values. The default Nighthawk failure predicates report failure if (1) Nighthawk could not connect to the target (see 'benchmark.pool_connection_failure' counter; check the address and port number, and try explicitly setting --address-family v4 or v6, especially when using DNS; instead of localhost try 127.0.0.1 or ::1 explicitly), (2) the protocol was not supported by the target (see 'benchmark.stream_resets' counter; check http/https in the URI, --h2), (3) the target returned a 4xx or 5xx HTTP response code (see 'benchmark.http_4xx' and 'benchmark.http_5xx' counters; check the URI path and the server config), or (4) a custom gRPC RequestSource failed. --failure-predicate can be used to relax expectations.
Nighthawk - A layer 7 protocol benchmarking tool.

Queueing and connection setup latency (1 samples)
  min: 0s 000ms 862us | mean: 0s 000ms 862us | max: 0s 000ms 862us | pstdev: 0s 000ms 000us

Response body size in bytes (1 samples)
  min: 0 | mean: 0.0 | max: 0 | pstdev: 0.0

Initiation to completion (1 samples)
  min: 0s 001ms 036us | mean: 0s 001ms 036us | max: 0s 001ms 036us | pstdev: 0s 000ms 000us

Counter                                 Value       Per second
benchmark.stream_resets                 1           9.90
cluster_manager.cluster_added           1           9.90
default.total_match_count               1           9.90
membership_change                       1           9.90
runtime.load_success                    1           9.90
runtime.override_dir_not_exists         1           9.90
sequencer.failed_terminations           1           9.90
upstream_cx_destroy                     1           9.90
upstream_cx_destroy_local               1           9.90
upstream_cx_destroy_local_with_active_rq 1          9.90
upstream_cx_destroy_with_active_rq      1           9.90
upstream_cx_http1_total                 1           9.90
upstream_cx_protocol_error              1           9.90
upstream_cx_rx_bytes_total              46          455.21
upstream_cx_total                       1           9.90
upstream_cx_tx_bytes_total              40          395.84
upstream_rq_pending_total               1           9.90
upstream_rq_total                       1           9.90

[00:07:25.121261][2049664][D] Joining completionThread
[00:07:25.121267][2049665][D] completionThread exiting
[00:07:25.121350][2049664][D] Joined completionThread
[00:07:25.121398][2049664][D] shutting down thread local cluster manager
[00:07:25.123361][2049658][D] destroying dispatcher worker_thread
[00:07:25.123395][2049658][D] ClusterImplBase destroyed
[00:07:25.123398][2049658][D] init manager Cluster 0 destroyed
[00:07:25.123412][2049658][D] Joining completionThread
[00:07:25.123445][2049663][D] completionThread exiting
[00:07:25.123491][2049658][D] Joined completionThread
[00:07:25.123747][2049658][D] shutting down thread local cluster manager
[00:07:25.123847][2049658][E] An error ocurred.
[00:07:25.123866][2049658][D] Nighthawk destroyed
[00:07:25.123871][2049658][D] init manager RTDS destroyed
[00:07:25.123873][2049658][D] RTDS destroyed
[00:07:25.123901][2049658][D] destroyed access loggers
[00:07:25.123921][2049658][D] init manager nh_init_manager destroyed
[00:07:25.123925][2049658][D] destroying dispatcher main_thread

If the Nighthawk client doesn't support gRPC, I will try to use the Nighthawk test server instead of this gRPC service.
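Roughly what I have in mind (untested; the config path is the one from the README example and the port is whatever that config binds, so both may need adjusting):

# Start the HTTP test server with the example config from the repo.
bazel-bin/nighthawk_test_server \
  --config-path nighthawk/test/integration/configurations/nighthawk_http_origin.yaml &
# Benchmark it over plain HTTP with the Nighthawk client.
bazel-bin/nighthawk_client --rps 100 --duration 10 http://127.0.0.1:80/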

gyohuangxin avatar Apr 18 '22 16:04 gyohuangxin

@gyohuangxin this issue tracks a feature where Nighthawk will be able to generate gRPC load in order to test gRPC servers. That feature hasn't been implemented yet.

The problem you seem to be facing is related to running Nighthawk as a gRPC service. That is fully functional. Would you mind opening a new issue where we can discuss the problems you encountered?
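To illustrate the distinction: the gRPC service is a control plane that accepts benchmark execution requests over gRPC and then generates HTTP load against a target; it is not itself a target you can point nighthawk_client at. A rough, untested sketch of driving it, with flag, method, and field names as I recall them from api/client/service.proto (please verify against the proto before relying on this):

# Start the Nighthawk gRPC service.
bazel-bin/nighthawk_service --listen 127.0.0.1:8443 &
# Ask it to benchmark an HTTP target. ExecutionStream is a bidirectional
# streaming RPC; grpcurl sends the single request below.
grpcurl -plaintext \
  -d '{"start_request": {"options": {"uri": "http://127.0.0.1:80/", "duration": "5s"}}}' \
  127.0.0.1:8443 nighthawk.client.NighthawkService/ExecutionStream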

mum4k avatar Apr 18 '22 18:04 mum4k

@mum4k Thanks for clarifying this. Yes, running Nighthawk as a gRPC service is functional, but the Nighthawk client cannot send gRPC requests to this service. So this issue is also related to the feature you mentioned.
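For example (an untested sketch): even forcing HTTP/2 and the gRPC content-type with the existing flags doesn't help, because the client still doesn't emit gRPC's length-prefixed message framing:

# --h2 switches to HTTP/2 and the header mimics gRPC, but the request
# body is not framed as gRPC messages, so a gRPC server rejects it.
./nighthawk_client --h2 \
  --request-header "content-type: application/grpc" \
  http://127.0.0.1:8443/nighthawk.client.NighthawkService/ExecutionStream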

gyohuangxin avatar Apr 19 '22 01:04 gyohuangxin

We're looking at this area again, and I'm commenting to ask whether anything has changed since the last update.

leecalcote avatar Jun 15 '23 13:06 leecalcote

Thank you for reaching out @leecalcote, there has been no change regarding gRPC support since the last update.

mum4k avatar Jun 15 '23 21:06 mum4k