aws-ebs-csi-driver
panic: could not get number of attached ENIs
/kind bug
What happened?
Hey all! I'm trying to install the aws-ebs-csi-driver following this guide: https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html. I created all the roles and policies.
On a quick look at the ebs-csi-node pod in my k8s environment, I can see that I get this error from the ebs-plugin container:

```
I0628 10:44:05.130666 1 metadata.go:85] retrieving instance data from ec2 metadata
I0628 10:44:05.135264 1 metadata.go:92] ec2 metadata is available
panic: could not get number of attached ENIs

goroutine 1 [running]:
github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver.newNodeService(0xc0000c6f00)
	/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver/node.go:86 +0x269
github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver.NewDriver({0xc000609f30, 0x8, 0x55})
	/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver/driver.go:95 +0x38e
main.main()
	/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/cmd/main.go:46 +0x365
```
I'm using driver version v1.7.0-eksbuild.0 and Kubernetes 1.20. Do you know how I can solve it? Thanks!
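For context, the panic comes from the node service's startup path (node.go:86 in the trace above), where the driver counts the instance's attached ENIs via the EC2 instance metadata service (IMDS). Below is a minimal sketch of that lookup, assuming the standard IMDS endpoint and IMDSv1; it is an illustration of the failing check, not the driver's actual code:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// attachedENIs lists the MAC addresses attached to this instance via IMDS
// and counts them. If the endpoint is blocked or returns nothing, the real
// driver has no ENI count and panics at startup, as in the trace above.
func attachedENIs() (int, error) {
	resp, err := http.Get("http://169.254.169.254/latest/meta-data/network/interfaces/macs/")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return 0, fmt.Errorf("unexpected status %d from IMDS", resp.StatusCode)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}
	// IMDS returns one MAC address per line.
	macs := strings.Fields(strings.TrimSpace(string(body)))
	if len(macs) == 0 {
		return 0, fmt.Errorf("IMDS returned no MAC addresses")
	}
	return len(macs), nil
}

func main() {
	n, err := attachedENIs()
	if err != nil {
		panic(fmt.Sprintf("could not get number of attached ENIs: %v", err))
	}
	fmt.Println("attached ENIs:", n)
}
```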
#1237
Hi @idanl21, are you deploying the driver on Nitro instances?
@torredil I don't think so, I'm deploying it on i3.xlarge or i3en.xlarge instances.
This happened to me when I was using a patched version of kube2iam to disable sensitive metadata. It was preventing the `/{version}/meta-data/network/interfaces/macs/` endpoint from being passed through. Resolving that issue made this error go away.
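If a metadata proxy like kube2iam sits in front of IMDS, one way to confirm whether that path is passed through is to query it from a pod scheduled on the affected node. A minimal sketch follows; the IMDSv2 token step is an assumption (a plain GET like the one above suffices on IMDSv1 instances):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	// IMDSv2: obtain a short-lived session token first.
	tokReq, _ := http.NewRequest(http.MethodPut,
		"http://169.254.169.254/latest/api/token", nil)
	tokReq.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "60")
	tokResp, err := client.Do(tokReq)
	if err != nil {
		panic(fmt.Sprintf("token request failed: %v", err))
	}
	token, _ := io.ReadAll(tokResp.Body)
	tokResp.Body.Close()

	// Query the path the driver needs. A filtering metadata proxy
	// typically answers 403/404 or with an empty body here.
	req, _ := http.NewRequest(http.MethodGet,
		"http://169.254.169.254/latest/meta-data/network/interfaces/macs/", nil)
	req.Header.Set("X-aws-ec2-metadata-token", strings.TrimSpace(string(token)))
	resp, err := client.Do(req)
	if err != nil {
		panic(fmt.Sprintf("metadata request failed: %v", err))
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}
```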
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.