kpng
The ipvs backend segv's if the ipvs module is not loaded
First: please do not check for modules! That is the wrong path; please read https://github.com/kubernetes/kubernetes/issues/108579#issuecomment-1125675626.
That said, kpng shouldn't segv for any reason, so this should be fixed.
In my environment I work around it by running this before starting kpng:
ipvsadm -Ln > /dev/null || log-error
See, no module checks!
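For illustration, the same probe can be done from a small Go wrapper instead of a shell snippet. This is only a sketch of the workaround above, not kpng code; it assumes ipvsadm is on the PATH:

```go
// Sketch: issue a dummy IPVS "list" before starting kpng. The request
// itself is what matters; it succeeds exactly when ipvs is functional.
package main

import (
	"log"
	"os/exec"
)

func main() {
	if out, err := exec.Command("ipvsadm", "-Ln").CombinedOutput(); err != nil {
		log.Fatalf("ipvs is not functional: %v: %s", err, out)
	}
	// ...start kpng here...
}
```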
Stack trace:
I0726 06:48:19.622118 959 ipvs.go:397] adding dummy IP 12.0.0.1/32
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1503911]
goroutine 1 [running]:
github.com/vishvananda/netlink.(*Handle).addrHandle(0x8fd8a0?, {0x0?, 0x0?}, 0xc0003f9258, 0xc0003f9158)
github.com/vishvananda/[email protected]/addr_linux.go:80 +0x51
github.com/vishvananda/netlink.(*Handle).AddrAdd(0x296b720?, {0x0?, 0x0?}, 0x0?)
github.com/vishvananda/[email protected]/addr_linux.go:35 +0xd3
github.com/vishvananda/netlink.AddrAdd(...)
github.com/vishvananda/[email protected]/addr_linux.go:24
sigs.k8s.io/kpng/backends/ipvs-as-sink.(*Backend).addServiceIPToKubeIPVSIntf(0xc0003d2e80, {0xc000404db0, 0x8})
sigs.k8s.io/kpng/backends/ipvs-as-sink/ipvs.go:398 +0x305
sigs.k8s.io/kpng/backends/ipvs-as-sink.(*Backend).AddIP(0xc00038dc04?, 0xc0003ec8f0, {0xc000404db0, 0x8}, 0x7f0ea969b538?)
sigs.k8s.io/kpng/backends/ipvs-as-sink/ipvs.go:140 +0x17f
sigs.k8s.io/kpng/client/serviceevents.(*ServicesListener).diff.func9(0x10?)
sigs.k8s.io/kpng/client/serviceevents/service-events.go:171 +0x4c
sigs.k8s.io/kpng/client/serviceevents.Diff.SlicesLen({0xc0003f9600?, 0xc0003f96f0?, 0x1b43a20?, 0xc0003f9730?}, 0x0, 0x1)
sigs.k8s.io/kpng/client/serviceevents/diff.go:51 +0xf9
sigs.k8s.io/kpng/client/serviceevents.(*ServicesListener).diff(0xc0003cf3e0, 0x0, 0xc0003ec8f0)
sigs.k8s.io/kpng/client/serviceevents/service-events.go:177 +0xcad
sigs.k8s.io/kpng/client/serviceevents.(*ServicesListener).SetService(0xc0003cf3e0, 0xc0003ec8f0)
sigs.k8s.io/kpng/client/serviceevents/service-events.go:90 +0xc8
sigs.k8s.io/kpng/client/serviceevents.wrapper.SetService({{0x1ce93c8?, 0xc0003d2e80?}, 0xc0003cf3e0?}, 0x41?)
sigs.k8s.io/kpng/client/serviceevents/wrap.go:73 +0x48
sigs.k8s.io/kpng/client/localsink/decoder.(*Sink).Send(0xc000113580, 0xc0003fc660?)
sigs.k8s.io/kpng/client/localsink/decoder/decoder.go:88 +0x31f
sigs.k8s.io/kpng/client/localsink/filterreset.(*Sink).Send(0xc0003fc630, 0xc000409400)
sigs.k8s.io/kpng/client/localsink/filterreset/filterreset.go:79 +0x225
sigs.k8s.io/kpng/server/pkg/server/watchstate.(*WatchState).send(0xc00004e690, 0xc0004d0410?)
sigs.k8s.io/kpng/server/pkg/server/watchstate/watchstate.go:95 +0x34
sigs.k8s.io/kpng/server/pkg/server/watchstate.(*WatchState).sendSet(0x498966?, 0x1, {0xc00050d098, 0x12}, {0x1ccda60?, 0xc0004d0410?})
sigs.k8s.io/kpng/server/pkg/server/watchstate/watchstate.go:107 +0x16b
sigs.k8s.io/kpng/server/pkg/server/watchstate.(*WatchState).SendUpdates(0xc00004e690, 0x11c000?)
sigs.k8s.io/kpng/server/pkg/server/watchstate/watchstate.go:69 +0xfd
sigs.k8s.io/kpng/server/jobs/store2localdiff.(*jobRun).SendDiff(0xc00035b1c0?, 0xc00004e690)
sigs.k8s.io/kpng/server/jobs/store2localdiff/store2localdiff.go:118 +0x90
sigs.k8s.io/kpng/server/jobs/store2diff.(*Job).Run(0xc0003f9ca8, {0x1ce4528, 0xc00035a340})
sigs.k8s.io/kpng/server/jobs/store2diff/store2diff.go:81 +0x391
sigs.k8s.io/kpng/server/jobs/store2localdiff.(*Job).Run(0xc0003f12d8, {0x1ce4528, 0xc00035a340})
sigs.k8s.io/kpng/server/jobs/store2localdiff/store2localdiff.go:55 +0xfb
sigs.k8s.io/kpng/cmd/kpng/storecmds.SetupFunc.ToLocalCmd.func2({0x1ce4fa8?, 0xc0003fc630?})
sigs.k8s.io/kpng/cmd/kpng/storecmds/storecmds.go:110 +0x54
sigs.k8s.io/kpng/cmd/kpng/storecmds.LocalCmds.func1(0xc0000e2280?, {0x1a507a7?, 0x1?, 0x1?})
sigs.k8s.io/kpng/cmd/kpng/storecmds/storecmds.go:124 +0x35
github.com/spf13/cobra.(*Command).execute(0xc0000e2280, {0xc000440f20, 0x1, 0x1})
github.com/spf13/[email protected]/command.go:856 +0x67c
github.com/spf13/cobra.(*Command).ExecuteC(0xc00045c780)
github.com/spf13/[email protected]/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/[email protected]/command.go:902
main.main()
sigs.k8s.io/kpng/cmd/kpng/main.go:55 +0x12f
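Judging from the trace, netlink.AddrAdd is called with a nil link: a LinkByName lookup for the dummy interface presumably fails (the interface was never created because ipvs isn't functional) and the error goes unchecked. Below is a minimal sketch of the defensive shape of that call; the interface name kube-ipvs0 and the function itself are illustrative assumptions, not kpng's actual code:

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// addDummyIP adds ip/32 to the dummy interface, failing cleanly instead of
// panicking when the interface is missing.
func addDummyIP(ip string) error {
	// LinkByName returns (nil, err) when the interface does not exist;
	// passing that nil link straight to AddrAdd is what segfaults.
	link, err := netlink.LinkByName("kube-ipvs0")
	if err != nil {
		return fmt.Errorf("dummy interface kube-ipvs0 not found: %w", err)
	}
	addr, err := netlink.ParseAddr(ip + "/32")
	if err != nil {
		return fmt.Errorf("parse address %s: %w", ip, err)
	}
	if err := netlink.AddrAdd(link, addr); err != nil {
		return fmt.Errorf("add %s to kube-ipvs0: %w", ip, err)
	}
	return nil
}

func main() {
	if err := addDummyIP("12.0.0.1"); err != nil {
		fmt.Println("error:", err)
	}
}
```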
/kind bug
/cc @VivekThrivikraman-est
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Ohhh, I see your original comment now. Gotcha; will ask more questions later, thanks.
Kpng (or K8s for that matter) MUST NOT make assumptions about how the kernel is built, or about special locations like /boot.
When a function that requires a module (like ipvs) is requested, the kernel will automatically load the module if needed. So, to test a function, one should make some (dummy) request; a simple "list" (ipvs, iptables, conntrack, etc.) will most likely do, as sketched below.
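To make the "dummy list" idea concrete, here is an in-process sketch using the github.com/moby/ipvs library (an assumption for illustration; kpng's actual IPVS client may differ). Any real request, even listing an empty table, gives the kernel the chance to autoload ip_vs:

```go
package main

import (
	"log"

	"github.com/moby/ipvs"
)

func main() {
	// New("") opens a netlink handle in the current network namespace.
	h, err := ipvs.New("")
	if err != nil {
		log.Fatalf("open ipvs handle: %v", err)
	}
	defer h.Close()

	// A dummy "list" request: succeeds (possibly returning zero services)
	// exactly when ipvs is usable on this node.
	if _, err := h.GetServices(); err != nil {
		log.Fatalf("ipvs list failed: %v", err)
	}
	log.Println("ipvs is functional")
}
```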
As a side remark, kpng MUST NOT set sysctls. That is the main reason why the kube-proxy container can't run unprivileged.
Below is the kernel config that enables automatic loading of modules:
│ CONFIG_MODPROBE_PATH: │
│ │
│ When kernel code requests a module, it does so by calling │
│ the "modprobe" userspace utility. This option allows you to │
│ set the path where that binary is found. This can be changed │
│ at runtime via the sysctl file │
│ /proc/sys/kernel/modprobe. Setting this to the empty string │
│ removes the kernel's ability to request modules (but │
│ userspace can still load modules explicitly). │
│ │
│ Symbol: MODPROBE_PATH [=/sbin/modprobe] │
│ Type : string │
│ Defined at kernel/module/Kconfig:248 │
│ Prompt: Path to modprobe binary │
│ Depends on: MODULES [=y] │
│ Location: │
│ Main menu │
│ -> Enable loadable module support (MODULES [=y]) │
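If autoloading still seems broken, a cheap sanity check (again an illustrative sketch, not kpng code) is to read the sysctl mentioned above; an empty value means the kernel cannot request modules at all:

```go
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/sys/kernel/modprobe")
	if err != nil {
		log.Fatalf("read /proc/sys/kernel/modprobe: %v", err)
	}
	if path := strings.TrimSpace(string(data)); path == "" {
		log.Fatal("module autoloading is disabled (empty modprobe path)")
	} else {
		log.Printf("kernel requests modules via %q", path)
	}
}
```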
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Please see https://github.com/kubernetes/kubernetes/pull/114669
/reopen
@jayunit100: Reopened this issue.
In response to this:
/reopen
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
/reopen
@daman1807: Reopened this issue.
In response to this:
/reopen
/assign
/remove-lifecycle rotten
/close
@daman1807: Closing this issue.
In response to this:
/close