node-driver-registrar
csi-driver-registrar Registration process failed on AWS node
Hello. When I follow the instructions for deploying csi-driver-registrar on AWS cluster (EKS) nodes (EC2), the node-driver-registrar always fails with the following log:

I0608 14:48:02.757010 1 main.go:113] Version: v2.1.0-0-g80d42f24
I0608 14:48:02.757918 1 connection.go:153] Connecting to unix:///csi/csi.sock
I0608 14:48:02.852446 1 node_register.go:52] Starting Registration Server at: /registration/io.openebs.csi-mayastor-reg.sock
I0608 14:48:02.852593 1 node_register.go:61] Registration Server started at: /registration/io.openebs.csi-mayastor-reg.sock
I0608 14:48:02.852650 1 node_register.go:83] Skipping healthz server because HTTP endpoint is set to: ""
I0608 14:48:04.226354 1 main.go:80] Received GetInfo call: &InfoRequest{}
I0608 14:48:04.589035 1 main.go:90] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: detected topology value collision: driver reported "kubernetes.io/hostname":"ip-X-X-X-X" but existing label is "kubernetes.io/hostname":"ip-X-X-X-X.us-east-1.compute.internal",}
E0608 14:48:04.589073 1 main.go:92] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: error updating Node object with CSI driver node info: error updating node: timed out waiting for the condition; caused by: detected topology value collision: driver reported "kubernetes.io/hostname":"ip-X-X-X-X" but existing label is "kubernetes.io/hostname":"ip-X-X-X-X.us-east-1.compute.internal", restarting registration container.
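For context on the failure mode: the error is raised while kubelet merges the topology keys from the driver's NodeGetInfo response into the Node object's labels, and it refuses to overwrite a label that already exists with a different value. Below is a minimal Go sketch of that check; it is illustrative only, and the function and variable names are assumptions, not the actual kubelet code:

```go
// Illustrative sketch of the collision check behind the error above.
// This is NOT the actual kubelet source; names are hypothetical.
package main

import "fmt"

// mergeTopologyLabels refuses to overwrite an existing node label
// with a different value reported by the CSI driver.
func mergeTopologyLabels(existing, reported map[string]string) error {
	for key, want := range reported {
		if have, ok := existing[key]; ok && have != want {
			return fmt.Errorf(
				"detected topology value collision: driver reported %q:%q but existing label is %q:%q",
				key, want, key, have)
		}
		existing[key] = want
	}
	return nil
}

func main() {
	// Label kubelet set on the EKS node (FQDN form).
	nodeLabels := map[string]string{
		"kubernetes.io/hostname": "ip-X-X-X-X.us-east-1.compute.internal",
	}
	// Topology the CSI driver reported via NodeGetInfo (short hostname).
	driverTopology := map[string]string{
		"kubernetes.io/hostname": "ip-X-X-X-X",
	}
	if err := mergeTopologyLabels(nodeLabels, driverTopology); err != nil {
		fmt.Println(err) // reproduces the collision message from the log
	}
}
```

On EKS, nodes are typically registered under their private DNS name (the us-east-1.compute.internal form), so a driver that derives its topology value from the short hostname will collide in exactly this way.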
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
From the error, it looks like the node has a different label from what the driver reported; have you checked that those match?
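One way to check the node side of the comparison is to read the kubernetes.io/hostname label off the Node object and compare it with the hostname the driver reports in the registrar log above. A sketch using client-go (a hypothetical helper, not part of node-driver-registrar; assumes a reachable kubeconfig in $KUBECONFIG and the node name as the first argument):

```go
// Hypothetical diagnostic helper: prints the kubernetes.io/hostname
// label of a node so it can be compared with the value the CSI driver
// reports in NodeGetInfo.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Usage: go run main.go <node-name>
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), os.Args[1], metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Labels["kubernetes.io/hostname"])
}
```

The same label is also visible in the output of kubectl describe node.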
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Feel free to reopen this issue; I haven't seen any activity since my last comment.