
nf_conntrack_ipv4 not found on Debian 10

juanpablofava opened this issue 3 years ago · 9 comments

Environment:

  • Cloud provider or hardware configuration: Bare metal install

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):

    Linux 4.19.0-18-amd64 x86_64
    PRETTY_NAME="Debian GNU/Linux 10 (buster)"
    NAME="Debian GNU/Linux"
    VERSION_ID="10"
    VERSION="10 (buster)"
    VERSION_CODENAME=buster
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

  • Version of Ansible (ansible --version):

    ansible 2.10.15
    config file = /home/jfava/kubespray/ansible.cfg
    configured module search path = ['/home/jfava/kubespray/library']
    ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
    executable location = /usr/local/bin/ansible
    python version = 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0]

  • Version of Python (python --version): Python 3.8.10

Kubespray version (commit) (git rev-parse --short HEAD): 25371779

Network plugin used: calico

Error:

nf_conntrack_ipv4 does not exist; it has been renamed to nf_conntrack.
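
For background, the IPv4- and IPv6-specific conntrack modules were merged into the core nf_conntrack module in Linux 4.19, so the old name simply no longer exists on these kernels. An illustrative way to confirm this on an affected host (not part of the original report):

$ find /lib/modules/$(uname -r) -name 'nf_conntrack*.ko'
# on a 4.19+ kernel this lists nf_conntrack.ko but no nf_conntrack_ipv4.ko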

juanpablofava avatar Mar 04 '22 18:03 juanpablofava

same here on Ubuntu 20.04

avolution avatar Mar 05 '22 15:03 avolution

Looks like a regression bug: https://github.com/kubernetes-sigs/kubespray/issues/6934

alkmim avatar Mar 12 '22 11:03 alkmim

The detection of nf_conntrack_ipv4 is gated behind a check (https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes/node/tasks/main.yml#L110-L135), so I'm a bit puzzled about the actual error you are seeing.

Could you run the playbook with -vvv and share the log?

cristicalin avatar Mar 15 '22 17:03 cristicalin

I guess the error message related to nf_conntrack_ipv4 was shown, but the deployment itself succeeded because of ignore_errors: true on the Modprobe nf_conntrack_ipv4 task, right? I sometimes get questions about this error message internally at my company, and I tell people "please ignore it".
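
For context, the task in question follows this general pattern (a paraphrased sketch of roles/kubernetes/node/tasks/main.yml, not the verbatim source; the register variable name is illustrative):

- name: Modprobe nf_conntrack_ipv4
  modprobe:
    name: nf_conntrack_ipv4
    state: present
  register: modprobe_nf_conntrack_ipv4
  # ignore_errors lets the play continue on failure, but Ansible still prints
  # the red FATAL message that users mistake for a deployment-stopping error.
  ignore_errors: true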

oomichi avatar Mar 22 '22 02:03 oomichi

I can confirm the difference in newer Debian releases, as @juanpablofava stated. The conntrack modules shipped per release are:

  • Debian 9: nf_conntrack_ipv4, nf_conntrack
  • Debian 10: nf_conntrack
  • Debian 11: nf_conntrack

On Debian 9, nf_conntrack_ipv4 itself depends on nf_conntrack, so I guess it's still safe on Debian 9 if we just change this line to nf_conntrack.


I've tested using an ad-hoc Ansible command to invoke modprobe nf_conntrack_ipv4 on my Debian 11 cluster, and the error is confirmed:

$ ansible -i inventory/hosts.yml -m modprobe -a 'name=nf_conntrack_ipv4 state=present' k8s_cluster
m3 | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "msg": "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/5.10.0-12-amd64\n",
    "name": "nf_conntrack_ipv4",
    "params": "",
    "rc": 1,
    "state": "present",
    "stderr": "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/5.10.0-12-amd64\n",
    "stderr_lines": [
        "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/5.10.0-12-amd64"
    ],
    "stdout": "",
    "stdout_lines": []
}

After renaming it to nf_conntrack, everything works fine:

$ ansible -i inventory/hosts.yml -m modprobe -a 'name=nf_conntrack state=present' k8s_cluster
m3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "name": "nf_conntrack",
    "params": "",
    "state": "present"
}
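
To double-check the dependency direction before relying on the rename, modinfo can print a module's depends field (illustrative commands; exact output varies by kernel build):

$ modinfo -F depends nf_conntrack_ipv4    # on Debian 9 this typically includes nf_conntrack
$ modprobe --show-depends nf_conntrack    # prints the insmod steps without loading anything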

rtsp avatar Mar 24 '22 20:03 rtsp

@juanpablofava do you see an ignored error, or a real error that stops the deployment?

champtar avatar Mar 25 '22 03:03 champtar

The deployment stopped with an error and did not finish. Then I renamed the module as @rtsp did, ran again, and everything worked fine. The cluster is in production now, so I cannot reproduce it any more.

juanpablofava avatar Mar 25 '22 10:03 juanpablofava

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 23 '22 10:06 k8s-triage-robot

In roles/kubernetes/node/tasks/main.yml you just need to rename all occurrences of nf_conntrack_ipv4 to nf_conntrack.

After this rename, it works for me on Debian 11.
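
For reference, a minimal sketch of the renamed task, assuming the surrounding structure of main.yml is otherwise unchanged (not the verbatim upstream fix):

- name: Modprobe nf_conntrack
  modprobe:
    name: nf_conntrack   # was nf_conntrack_ipv4; merged into nf_conntrack since kernel 4.19
    state: present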

Accuratiorem avatar Jul 21 '22 21:07 Accuratiorem

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 20 '22 21:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 19 '22 22:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 19 '22 22:09 k8s-ci-robot