nf_conntrack_ipv4 not found on Debian 10
Environment:
- Cloud provider or hardware configuration: Bare metal install
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
  Linux 4.19.0-18-amd64 x86_64
  PRETTY_NAME="Debian GNU/Linux 10 (buster)"
  NAME="Debian GNU/Linux"
  VERSION_ID="10"
  VERSION="10 (buster)"
  VERSION_CODENAME=buster
  ID=debian
  HOME_URL="https://www.debian.org/"
  SUPPORT_URL="https://www.debian.org/support"
  BUG_REPORT_URL="https://bugs.debian.org/"
- Version of Ansible (ansible --version):
  ansible 2.10.15
  config file = /home/jfava/kubespray/ansible.cfg
  configured module search path = ['/home/jfava/kubespray/library']
  ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0]
- Version of Python (python --version): Python 3.8.10
- Kubespray version (commit) (git rev-parse --short HEAD): 25371779
- Network plugin used: calico
Error:
nf_conntrack_ipv4 does not exist anymore; it has been renamed to nf_conntrack.
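For context, the IPv4- and IPv6-specific conntrack modules were merged into nf_conntrack in Linux 4.19, which is exactly the kernel Debian 10 ships. A quick way to confirm this on an affected host (a sketch; the module path depends on your kernel version):

$ modinfo nf_conntrack_ipv4
modinfo: ERROR: Module nf_conntrack_ipv4 not found.
$ modinfo -n nf_conntrack
/lib/modules/4.19.0-18-amd64/kernel/net/netfilter/nf_conntrack.ko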
Same here on Ubuntu 20.04.
Looks like a regression bug: https://github.com/kubernetes-sigs/kubespray/issues/6934
The detection of nf_conntrack_ipv4 is gated behind a check: https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes/node/tasks/main.yml#L110-L135 so I'm a bit puzzled about the actual error you are seeing.
Could you run the playbook with -vvv and share the log?
I guess the error message related to nf_conntrack_ipv4 was shown but the deployment itself succeeded, because of the ignore_errors: true on the Modprobe nf_conntrack_ipv4 task, right?
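For reference, the shape of that task is roughly the following (a sketch, not the exact kubespray code; the when: condition here is an assumption, the real gating is in the lines linked above):

- name: Modprobe nf_conntrack_ipv4
  modprobe:
    name: nf_conntrack_ipv4
    state: present
  ignore_errors: true  # the modprobe failure is reported in the log but does not fail the play
  when: kube_proxy_mode == 'ipvs'  # assumed condition; see the linked check for the actual one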
I sometimes get questions about this error message internally at my company, and I just say "please ignore it".
I can confirm the differences in newer Debian releases, as @juanpablofava stated:
- Debian 9: nf_conntrack_ipv4, nf_conntrack
- Debian 10: nf_conntrack
- Debian 11: nf_conntrack
In Debian 9, nf_conntrack_ipv4 depends on nf_conntrack anyway, so I guess it's still safe in Debian 9 if we just change this line to nf_conntrack.
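One way to verify that dependency on a Debian 9 host (a sketch; the 4.9.0-XX kernel version in the paths is a placeholder):

$ modprobe --show-depends nf_conntrack_ipv4
insmod /lib/modules/4.9.0-XX-amd64/kernel/net/netfilter/nf_conntrack.ko
insmod /lib/modules/4.9.0-XX-amd64/kernel/net/ipv4/netfilter/nf_conntrack_ipv4.ko

nf_conntrack is loaded first, so modprobing nf_conntrack directly cannot fail on Debian 9 either.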
I've tested running an Ansible ad-hoc command to invoke modprobe nf_conntrack_ipv4 on my Debian 11 cluster. Error confirmed:
$ ansible -i inventory/hosts.yml -m modprobe -a 'name=nf_conntrack_ipv4 state=present' k8s_cluster
m3 | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "msg": "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/5.10.0-12-amd64\n",
    "name": "nf_conntrack_ipv4",
    "params": "",
    "rc": 1,
    "state": "present",
    "stderr": "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/5.10.0-12-amd64\n",
    "stderr_lines": [
        "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/5.10.0-12-amd64"
    ],
    "stdout": "",
    "stdout_lines": []
}
After renaming it, everything works fine:
$ ansible -i inventory/hosts.yml -m modprobe -a 'name=nf_conntrack state=present' k8s_cluster
m3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "name": "nf_conntrack",
    "params": "",
    "state": "present"
}
@juanpablofava do you see an ignored error, or a real error that stops the deployment?
The deployment stopped with an error and did not finish. Then I renamed the module as @rtsp did and ran it again, and everything worked fine. The cluster is in production now, so I cannot reproduce it anymore.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
In roles/kubernetes/node/tasks/main.yml you just need to rename all occurrences of nf_conntrack_ipv4 to nf_conntrack. After this rename it works for me on Debian 11; one way to apply it is shown below.
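For anyone patching this by hand, a one-liner for a kubespray checkout (a sketch using GNU sed; review the resulting diff before re-running the playbook):

$ sed -i 's/nf_conntrack_ipv4/nf_conntrack/g' roles/kubernetes/node/tasks/main.yml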
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.