vsphere-csi-driver

vsphere-csi-controller version 2.3.1 cannot start up with SELinux in enforcing mode, but comes up in permissive mode

zhoudayongdennis opened this issue 2 years ago · 2 comments

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: After upgrading the vSphere CSI driver to 2.3.1, the controller pods fail to start whenever SELinux is enforcing; the issue is reliably reproducible.

What you expected to happen: The vSphere CSI controller should come up with SELinux in enforcing mode.

How to reproduce it (as minimally and precisely as possible):
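A minimal reproduction sketch, assuming a CentOS 7 node (per the node names in the logs below) where SELinux can be toggled at runtime:

# on the node running the vsphere-csi-controller pod, confirm SELinux is enforcing
getenforce
# with SELinux enforcing, the controller pods never become fully ready
kubectl -n kube-system get pods | grep vsphere-csi-controller
# switch the node to permissive (non-persistent) and the same pods come up
sudo setenforce 0
kubectl -n kube-system get pods | grep vsphere-csi-controller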

Anything else we need to know?:

Environment:

  • csi-vsphere version: 2.3.1
  • vsphere-cloud-controller-manager version:
  • Kubernetes version: 1.21
  • vSphere version: 6.7.3
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Here are the corresponding logs.

kube-system   vsphere-csi-controller-7d8b64d7d8-h895r   5/6   CrashLoopBackOff   1068   3d
kube-system   vsphere-csi-controller-7d8b64d7d8-tnhmj   5/6   CrashLoopBackOff   1070   3d
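To confirm which of the six containers keeps restarting, a kubectl describe on one of the pods above is enough; the container logs then follow below.

kubectl -n kube-system describe pod vsphere-csi-controller-7d8b64d7d8-h895r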

kubectl logs -n kube-system vsphere-csi-controller-7d8b64d7d8-h895r -c vsphere-csi-controller

{"level":"error","time":"2022-04-02T12:31:51.180068548Z","caller":"node/manager.go:123","msg":"Couldn't find VM instance with nodeUUID , failed to discover with err: virtual machine wasn't found","TraceId":"e643aca0-8175-44a9-8754-e1c3e7afdd06","stacktrace":"sigs.k8s.io/vsphere-csi-driver/v2/pkg/common/cns-lib/node.(*defaultManager).DiscoverNode\n\t/root/go/src/github.com/kubernetes-csi/vsphere-csi-driver/pkg/common/cns-lib/node/manager.go:123\nsigs.k8s.io/vsphere-csi-driver/v2/pkg/common/cns-lib/node.(*defaultManager).RegisterNode\n\t/root/go/src/github.com/kubernetes-csi/vsphere-csi-driver/pkg/common/cns-lib/node/manager.go:107\nsigs.k8s.io/vsphere-csi-driver/v2/pkg/common/cns-lib/node.(*Nodes).nodeAdd\n\t/root/go/src/github.com/kubernetes-csi/vsphere-csi-driver/pkg/common/cns-lib/node/nodes.go:61\nk8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd\n\t/root/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231\nk8s.io/client-go/tools/cache.(*processorListener).run.func1\n\t/root/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90\nk8s.io/client-go/tools/cache.(*processorListener).run\n\t/root/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73"} {"level":"error","time":"2022-04-02T12:31:51.180116402Z","caller":"node/manager.go:109","msg":"failed to discover VM with uuid: "" for node: "tesla-auto-2203-v67-os7-centos-v22-control-02"","TraceId":"e643aca0-8175-44a9-8754-e1c3e7afdd06","stacktrace":"sigs.k8s.io/vsphere-csi-driver/v2/pkg/common/cns-lib/node.(*defaultManager).RegisterNode\n\t/root/go/src/github.com/kubernetes-csi/vsphere-csi-driver/pkg/common/cns-lib/node/manager.go:109\nsigs.k8s.io/vsphere-csi-driver/v2/pkg/common/cns-lib/node.(*Nodes).nodeAdd\n\t/root/go/src/github.com/kubernetes-csi/vsphere-csi-driver/pkg/common/cns-lib/node/nodes.go:61\nk8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd\n\t/root/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231\nk8s.io/client-go/tools/cache.(*processorListener).run.func1\n\t/root/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90\nk8s.io/client-go/tools/cache.(*processorListener).run\n\t/root/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73"}

The other containers in pod vsphere-csi-controller-7d8b64d7d8-h895r (e.g. the liveness, resizer, and provisioner containers) report the error below.

W0402 12:11:08.451634 1 connection.go:173] Still connecting to unix:///csi/csi.sock
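Since every sidecar is stuck waiting on the shared unix:///csi/csi.sock, this looks like SELinux denying access to the socket. One way to confirm that on the affected node, assuming auditd is running as on a stock CentOS 7 install:

# list recent AVC denials on the node hosting the pod
sudo ausearch -m avc -ts recent
# or grep the raw audit log for denials mentioning the csi socket
sudo grep -i denied /var/log/audit/audit.log | grep csi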

zhoudayongdennis · Apr 05 '22

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jul 04 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Aug 03 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot · Sep 02 '22

@k8s-triage-robot: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Sep 02 '22