Wasm filter with test server
I am trying to add a wasm HTTP filter to the filter chain. I have used the config.yaml shown in some examples, and added the wasm filter from the Envoy wasm-cc example, as shown below:
static_resources:
listeners:
# define an origin server on :10000 that always returns "lorem ipsum..."
- address:
socket_address:
address: 0.0.0.0
port_value: 10000
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
generate_request_id: false
codec_type: AUTO
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: service
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: web_service
http_filters:
- name: test-server # before envoy.router because order matters!
typed_config:
"@type": type.googleapis.com/nighthawk.server.ResponseOptions
response_body_size: 10
v3_response_headers:
- { header: { key: "foo", value: "bar" } }
- {
header: { key: "foo", value: "bar2" },
append: true,
}
- { header: { key: "x-nh", value: "1" } }
- name: envoy.filters.http.wasm
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
config:
name: "my_plugin"
root_id: "my_root_id"
# if your wasm filter requires custom configuration you can add
# as follows
configuration:
"@type": "type.googleapis.com/google.protobuf.StringValue"
value: |
{}
vm_config:
runtime: "envoy.wasm.runtime.wasmtime"
vm_id: "my_vm_id"
code:
local:
filename: "/home/xyz/envoyrepo/envoy/examples/wasm-cc/lib/envoy_filter_http_wasm_example.wasm"
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
dynamic_stats: false
clusters:
- name: web_service
type: strict_dns
lb_policy: round_robin
load_assignment:
cluster_name: service1
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: web_service
port_value: 9000
I have built nighthawk_test_server with wasmtime enabled by adding the extension to extensions_build_config.bzl and building with --define wasm=wasmtime, and the test server starts as expected.
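(For reference, the build invocation was roughly the following; the exact Bazel target label is an assumption, the --define flag is the one described above.)
$ bazel build --define wasm=wasmtime //:nighthawk_test_server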
However, with a curl request I don't see the expected header/body additions from the wasm filter in the response, only those from the test-server filter.
$ curl -v localhost:10000
* Uses proxy env variable no_proxy == 'localhost'
* Trying 127.0.0.1:10000...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 10000 (#0)
> GET / HTTP/1.1
> Host: localhost:10000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< foo: bar
< foo: bar2
< x-nh: 1
< content-length: 10
< content-type: text/plain
< date: Mon, 30 Aug 2021 17:58:36 GMT
< server: envoy
<
* Connection #0 to host localhost left intact
aaaaaaaaaa
Am I configuring the wasm filter incorrectly?
I don't think anyone has tried this with our test server yet, so you might be in uncharted terrain.
What happens when you take the test-server filter out of the equation by removing it from your configuration? Does the wasm filter get hit in that case / do you get the expected output then?
With just the wasm filter in the config file, I see the expected body/header in the response from the proxy. However, the issue is that I get a 503 Service Unavailable status in the response.
Config file:
static_resources:
listeners:
# define an origin server on :10000 that always returns "lorem ipsum..."
- address:
socket_address:
address: 0.0.0.0
port_value: 10000
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
generate_request_id: false
codec_type: AUTO
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: service
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: web_service
http_filters:
- name: envoy.filters.http.wasm
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
#"@type": type.googleapis.com/nighthawk.server.ResponseOptions
config:
name: "my_plugin"
root_id: "my_root_id"
# if your wasm filter requires custom configuration you can add
# as follows
configuration:
"@type": "type.googleapis.com/google.protobuf.StringValue"
value: |
{}
vm_config:
runtime: "envoy.wasm.runtime.wasmtime"
vm_id: "my_vm_id"
code:
local:
filename: "/home/xyz/envoyrepo/envoy/examples/wasm-cc/lib/envoy_filter_http_wasm_example.wasm"
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
dynamic_stats: false
clusters:
- name: web_service
type: strict_dns
lb_policy: round_robin
load_assignment:
cluster_name: service1
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: web_service
port_value: 9000
and the curl request with response is:
$ curl -v localhost:10000
* Uses proxy env variable no_proxy == 'localhost'
* Trying 127.0.0.1:10000...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 10000 (#0)
> GET / HTTP/1.1
> Host: localhost:10000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 503 Service Unavailable
< x-wasm-custom: FOO
< content-type: text/plain; charset=utf-8
< date: Mon, 30 Aug 2021 20:06:14 GMT
< server: envoy
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
Hello, worldpstream
I see. I'm not sure the wasm extension is supposed to generate a 200 OK; this might just be the way it behaves, or a bug shared with vanilla Envoy?
As for compatibility with our test server: the test-server filter takes a shortcut over at https://github.com/envoyproxy/nighthawk/blob/86163f747fc27207675a3bfe1a28e592b8e3447c/source/server/http_test_server_filter.cc#L61. This seemed efficient and worked for what was needed so far, but it robs extensions running after it of their chance to have a say about the response.
I think it is doable to change this and get it to behave as you expected. But I wonder:
- If and how that impacts performance.
- What other maintainers think about supporting full extensibility: right now we maintain a few extensions associated with the test server (test-server, dynamic-delay, time-tracking) and haven't had to worry about our extensions behaving like good citizens in this regard.
Thanks for the details. My goal is to use the Nighthawk client-server framework and implementation to measure the performance impact of one or more filters, a wasm filter for example. Does the following make sense?
- NH client sending requests to port 'A'.
- Envoy configured with only wasm filter, listening at port 'A'. It would have clusters at port 'B'.
- NH test-server listening at port 'B' (specify --base-id). It would have the dynamic-delay and test-server filters. (This would perhaps have its upstream cluster pointed at port 'C'.)
This way I would hopefully get the expected wasm filter responses as well as the test-server responses (and configurability such as concurrency, dynamic delay, etc.).
Or is there a better way to use the NH test server with extensions?
If you would like to measure the impact of (wasm) extensions in the Envoy proxy, then the setup you propose above sounds good to me:
nighthawk_client <-> proxy with (wasm) extensions <-> nighthawk_test_server
If you instead stack more (wasm) extensions in the test server, you are measuring the impact of the extensions at Nighthawk's test server instead of at the proxy; I suspect that's not what you are aiming for.
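A rough sketch of how that topology could be run, assuming the Bazel-built binaries; the ports, config file names, and flag values below are placeholders, not taken from this thread:
$ # proxy with the wasm filter, listening on port 'A' (here :10000), routing to port 'B'
$ envoy -c proxy_with_wasm_filter.yaml
$ # Nighthawk test server with the test-server/dynamic-delay filters, listening on port 'B' (here :9000)
$ bazel-bin/nighthawk_test_server -c test_server.yaml --base-id 1
$ # Nighthawk client driving load against the proxy on port 'A'
$ bazel-bin/nighthawk_client --concurrency 4 --duration 30 http://127.0.0.1:10000/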
@rahulchaphalkar Did you manage to test the performance using Nighthawk? I'm also getting a 503 response.