
[THREESCALE-8410] Add support to set proxy buffer size

tkan145 opened this issue 1 year ago • 0 comments

What

Fix https://issues.redhat.com/browse/THREESCALE-8410

Notes

What are the differences between buffers?

  • proxy_buffers: the total buffer space Nginx can use to hold the response from upstream. If the response is larger than the total size of proxy_buffers, Nginx writes part of the response to disk.
  • proxy_buffer_size: mainly used to hold the response headers.
  • proxy_busy_buffers_size: the part of the buffers that can be busy sending the response to the client while the response has not yet been fully read from the upstream server.
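
For orientation, the three directives sit together in an nginx http/server/location context like this (values are illustrative defaults on a 4k-page platform, not a recommendation):

```nginx
proxy_buffers 8 4k;          # 8 buffers of one 4k page: 32k total for the body
proxy_buffer_size 4k;        # separate buffer for the response headers
proxy_busy_buffers_size 8k;  # part of the buffers that may be busy sending to
                             # the client while the rest of the response is
                             # still being read from upstream
```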

What are the correct values for the buffers?

  • proxy_buffers:
    • Default: 8 4k|8k;
    • Min: must be at least 2 buffers
    • Max: no limit
  • proxy_buffer_size:
    • Min: no limit; it can be set smaller than the default (4k|8k), but that is not recommended
    • Max: no limit, but it should be no less than the maximum possible size of the response HTTP headers
    • Default: one memory page (4k|8k)
  • proxy_busy_buffers_size:
    • Min: can't be smaller than a single proxy_buffers buffer, and must be equal to or greater than the maximum of proxy_buffer_size and one proxy_buffers buffer.
    • Max: must be less than the total size of proxy_buffers minus one buffer (ie 8*4k = 32k, and 32k - 4k = 28k).
    • Default: if not explicitly set, proxy_busy_buffers_size is "the bigger of: twice proxy_buffer_size and the size of two proxy_buffers". This also means that if you set a bigger proxy_buffer_size, you are implicitly increasing proxy_busy_buffers_size as well.

Reference: https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_proxy_module.c#L3442
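
The default computation can be sketched in Python (a paraphrase of the linked ngx_http_proxy_module.c logic, not the actual code; the function name is mine):

```python
def default_busy_buffers_size(proxy_buffer_size, one_proxy_buffer):
    """Default proxy_busy_buffers_size: the bigger of twice
    proxy_buffer_size and the size of two proxy_buffers buffers."""
    return max(2 * proxy_buffer_size, 2 * one_proxy_buffer)

# Defaults on a 4k-page platform (proxy_buffers 8 4k; proxy_buffer_size 4k;):
print(default_busy_buffers_size(4 * 1024, 4 * 1024))   # 8192  -> 8k

# A bigger proxy_buffer_size implicitly raises the default:
print(default_busy_buffers_size(16 * 1024, 4 * 1024))  # 32768 -> 32k
```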

Why 4k|8k?

This is equal to one memory page, ie either 4K or 8K, depending on the platform.

How to check my pagesize

$ getconf PAGE_SIZE
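
If getconf is not handy, the same value can be read from Python (Unix only):

```python
import os

# One memory page in bytes; typically 4096 on x86_64 Linux
page = os.sysconf("SC_PAGE_SIZE")
print(page)
```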

Would increasing the buffer size also increase memory consumption?

Yes, the buffers are allocated per connection. By how much, you may ask? I honestly don't know; once I get the profiling tools sorted I'll run a few benchmark tests.
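
As a back-of-envelope upper bound (my own sketch, not a measurement), assuming every connection is buffering an upstream response at once:

```python
def worst_case_buffer_memory(connections, num_buffers, buffer_size, header_buffer):
    """Worst case: num_buffers body buffers plus the header buffer
    (proxy_buffer_size) reserved for every buffered connection.
    Real usage is lower, since nginx allocates buffers as needed."""
    return connections * (num_buffers * buffer_size + header_buffer)

# 1024 concurrent connections with the 4k-page defaults (8 x 4k body + 4k header):
print(worst_case_buffer_memory(1024, 8, 4 * 1024, 4 * 1024) // (1024 * 1024), "MiB")
```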

Increase the buffer number vs increase the buffer size

Whether a bigger number of smaller buffers or a smaller number of bigger buffers is better depends on each user's use case, ie many small responses vs many large responses, as well as how much memory they have and how much of it they are willing to waste. So it's hard to provide one solution that fits all.

Given the complexity of the rules above, I personally think we should provide just one setting and increase the buffer size, instead of messing around with both the number and the size of the buffers. And memory is cheap.

The downside of this approach is that if a user sets a really big buffer size, ie proxy_buffers 8 1024k;, a 1MB buffer is reserved for every buffered connection even when the response would fit in the default memory page size (4k|8k). However, from my initial test, nginx appears to allocate only the memory it needs; again, I will need to get those profiling tools sorted so I can peek into what is actually allocated.

Does this setting apply per product?

No, this setting is global.

Common errors:

upstream sent too big header while reading response header from upstream 

proxy_buffer_size is the only directive that needs tuning in order to resolve the error. However, due to the rule described above, proxy_busy_buffers_size also needs adjusting.
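
In hand-written nginx terms, the fix is a sketch like the following (8k/16k are illustrative values, not a recommendation):

```nginx
proxy_buffer_size 8k;         # must fit the whole upstream response header
proxy_busy_buffers_size 16k;  # must be >= max(proxy_buffer_size, one buffer),
                              # so it grows along with proxy_buffer_size
```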

Verification steps

  • Checkout this branch
  • Edit docker-compose-devel.yml as follows
diff --git a/docker-compose-devel.yml b/docker-compose-devel.yml
index 6e118560..a5ddbb2d 100644
--- a/docker-compose-devel.yml
+++ b/docker-compose-devel.yml
@@ -23,3 +23,7 @@ services:
       GIT_COMMITTER_EMAIL: ${GIT_COMMITTER_EMAIL:-""}
   redis:
     image: redis
+  my-upstream:
+    image: mccutchen/go-httpbin
+    expose:
+      - "8080"
  • Create an apicast-config.json file with the following content
cat <<EOF >apicast-config.json
{
    "services": [
        {
            "id": "1",
            "backend_version": "1",
            "proxy": {
                "hosts": [
                    "one"
                ],
                "api_backend": "http://my-upstream:8080/response-headers",
                "backend": {
                    "endpoint": "http://127.0.0.1:8081",
                    "host": "backend"
                },
                "policy_chain": [
                    {
                        "name": "apicast.policy.apicast"
                    }
                ],
                "proxy_rules": [
                    {
                        "http_method": "GET",
                        "pattern": "/",
                        "metric_system_name": "hits",
                        "delta": 1,
                        "parameters": [],
                        "querystring_parameters": {}
                    }
                ]
            }
        }
    ]
} 
EOF
  • Start the dev environment
make development
make dependencies
  • Run apicast locally
THREESCALE_DEPLOYMENT_ENV=staging APICAST_LOG_LEVEL=debug APICAST_WORKER=1 APICAST_CONFIGURATION_LOADER=lazy APICAST_CONFIGURATION_CACHE=0 THREESCALE_CONFIG_FILE=apicast-config.json ./bin/apicast
  • Capture apicast IP
APICAST_IP=$(docker inspect apicast_build_0-development-1 | yq e -P '.[0].NetworkSettings.Networks.apicast_build_0_default.IPAddress' -)
  • Generate big header
LARGE_HEADER=$(for i in {1..1024}; do echo -n 'ABCDE'; done)
  • Send request with big header
curl -i -k -H "Host: one" -H "Accept: application/json" "http://${APICAST_IP}:8080/?key=${LARGE_HEADER}&user_key="

It should return 502

HTTP/1.1 502 Bad Gateway
Server: openresty
Date: Thu, 20 Jun 2024 08:31:46 GMT
Content-Type: text/html
Content-Length: 154
Connection: keep-alive

and this line from the log

upstream sent too big header while reading response header from upstream
  • Stop the gateway
CTRL-C
  • Start gateway again with APICAST_PROXY_BUFFER_SIZE=8k
THREESCALE_DEPLOYMENT_ENV=staging APICAST_LOG_LEVEL=debug APICAST_WORKER=1 APICAST_CONFIGURATION_LOADER=lazy APICAST_CONFIGURATION_CACHE=0 THREESCALE_CONFIG_FILE=apicast-config.json APICAST_PROXY_BUFFER_SIZE="8k" ./bin/apicast
  • Send request again
curl -i -k -H "Host: one" -H "Accept: application/json" "http://${APICAST_IP}:8080/?key=${LARGE_HEADER}&user_key="

This time it should return HTTP/1.1 200 OK

HTTP/1.1 200 OK
Server: openresty
Date: Thu, 20 Jun 2024 09:04:58 GMT
Content-Type: application/json; charset=utf-8
Transfer-Encoding: chunked

tkan145 · Jun 17 '24 08:06