cluster-api
:bug: Set nofile ulimit for loadbalancer container
Hardcode a nofile ulimit when running the load balancer container. I've set the limit to quite a high number, but I don't think it needs to be that high for CAPD clusters.
This change is intended to solve https://github.com/docker-library/haproxy/issues/194, which impacts CAPD on Fedora and possibly other Linux distros. In future, the addition of a Resource setting to the run-container config structs could be used to set other kinds of limits, e.g. Memory and CPU.
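For reference, the docker CLI equivalent of what this change wires into the container-run config is roughly the following sketch (the container name is illustrative, and the image tag is just the one discussed later in this thread):

# Illustrative only: --ulimit takes soft:hard, matching the nofile limit this patch hardcodes.
docker run -d --name test-cluster-lb \
  --ulimit nofile=65536:65536 \
  kindest/haproxy:v20220607-9a4d8d2a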
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign timothysc for approval by writing /assign @timothysc in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/hold
Need to be sure this doesn't impact functionality on other systems. It might be safer to set this ulimit at a low-to-middle level, e.g. 8000-20000, rather than a high level, in case some platform setups have upper limits.
65536 is fairly conservative. The upper limit is based on the highest of: the maximum value of an unsigned int (65536), 10% of RAM in KB, and the value of the NR_FILE compilation variable, so 65536 will be the lowest of those.
That said, on Fedora you're likely to want to set a systemd limit for Docker anyway, as you're going to run into other issues. I've just rebuilt my desktop and need to check how I did that.
EDIT: I think this is actually related to cgroupsv2, so we will see it more as more things default to cgroupsv2
I'm not sure - the issue only started impacting me in recent months and seemed to be dependent on Docker / containerd version. Maybe it's linked to a config there.
65536 is fairly conservative. The upper limit is based on the highest of: the maximum value of an unsigned int (65536), 10% of RAM in KB, and the value of the NR_FILE compilation variable, so 65536 will be the lowest of those.
But I wonder if it's still enough, considering that 65536 open files sounds like a lot for a haproxy in a CAPD cluster (but I have no idea how much it usually uses; is there an easy way to check that?).
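One rough way to check, assuming the load balancer image ships a shell and that haproxy is PID 1 in the container (the container name is illustrative):

# File descriptors currently open by the haproxy process inside the LB container.
docker exec test-cluster-lb sh -c 'ls /proc/1/fd | wc -l'
# The nofile limits the process is actually running under.
docker exec test-cluster-lb sh -c 'grep "open files" /proc/1/limits'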
@killianmuldoon Considering: https://github.com/haproxy/haproxy/issues/1751#issuecomment-1162562114
What about:
- Bumping our kindest/haproxy image from v20210715-a6da3463 to v20220607-9a4d8d2a (both are using haproxy 2.2.9)?
- Trying to get haproxy bumped to a recent version in kind? (or maybe as a first test, trying to build the image with a new haproxy version and testing if the issue would disappear)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@killianmuldoon: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I've tested with HAProxy 2.6 (HAProxy version 2.6.9-1~bpo11+1 2023/02/15 - https://haproxy.org/ to be precise) and I still hit this issue locally.
I have built CAPD with this patch, using the existing HAProxy image, and it works like a charm.
Is there anything I can do to help get this merged?
Is there anything I can do to help get this merged?
I need time to get back to this :upside_down_face:. For now I've been using the workaround of setting the ulimits globally in my Docker config while we figure out the right way to do this. Currently I have the following in my systemd unit file:
ExecStart=/usr/bin/dockerd --default-ulimit nofile=65883:65883 -H fd:// --containerd=/run/containerd/containerd.sock
The default-ulimit arg takes care of this problem, but I would like to configure haproxy correctly so it's not an issue for other users.
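For anyone else applying the same workaround: rather than editing the unit file directly, the flag can live in a systemd drop-in (file name is arbitrary; values as above), followed by a reload and restart:

# /etc/systemd/system/docker.service.d/nofile-ulimit.conf
[Service]
# The empty ExecStart= clears the packaged command before overriding it.
ExecStart=
ExecStart=/usr/bin/dockerd --default-ulimit nofile=65883:65883 -H fd:// --containerd=/run/containerd/containerd.sock

systemctl daemon-reload
systemctl restart docker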
BTW - if you're interested in picking this up I'm happy to hand it over!
The last time we discussed this, I was OK with the upper number. Especially on macOS, the worst thing that happens is that you have to restart your Docker engine. And this is all local execution, so there's no remote attack vector.
I described the root cause in https://github.com/kubernetes-sigs/kind/issues/2954#issuecomment-1453826595, and fixed it in https://github.com/kubernetes-sigs/kind/pull/3115.
In light of that, I don't think we should change the file descriptor limit here, unless we have other reasons to do so.
I presume the next kind release will have the fix above. For now, I use a workaround similar to what @killianmuldoon described in https://github.com/kubernetes-sigs/cluster-api/pull/7344#issuecomment-1448672738.
/close
This has been partially fixed by #8246. The current state is that the haproxy image will init and likely crash on startup, but once CAPD is writing the config it will be stable.
The final fix will be to pick up a new kindest/haproxy image once they publish one, or to move to haproxytech/haproxy-alpine, either of which includes the maxconn setting in haproxy.cfg. I'll open an issue to track that update.
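To check whether a given load balancer image already ships a maxconn in its baked-in config (the config path is assumed from the upstream haproxy image; adjust if it differs):

docker run --rm --entrypoint sh kindest/haproxy:v20220607-9a4d8d2a \
  -c 'grep -n maxconn /usr/local/etc/haproxy/haproxy.cfg'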
@killianmuldoon: Closed this PR.
In response to this:
/close
This has been partially fixed by #8246. The current state is that the haproxy image will init and likely crash on startup, but once CAPD is writing the config it will be stable.
The final fix will be to pick up a new kindest/haproxy image once they publish one, or to move to haproxytech/haproxy-alpine, either of which includes the maxconn setting in haproxy.cfg. I'll open an issue to track that update.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.