aws-efs-csi-driver

efs-csi-controller won't start if IMDS access is blocked

korbin opened this issue 4 years ago • 16 comments

/kind bug

What happened?

With IMDS disabled per best practices (https://docs.aws.amazon.com/eks/latest/userguide/best-practices-security.html) on Bottlerocket hosts, pods from the efs-csi-controller deployment will not start.

We need a similar workaround for the controller, or for it to simply not need IMDS access to begin with.

The following is emitted to the log and the pod crashes:

F0127 18:13:01.145009 1 driver.go:54] could not get metadata from AWS: EC2 instance metadata is not available

What you expected to happen?

I expected efs-csi-controller to start. Explicitly passing the region, instance ID, or other IMDS-sourced information would be an acceptable alternative.

How to reproduce it (as minimally and precisely as possible)?

  • Block IMDS access
  • Deploy efs-csi-controller
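For reference, one common way to block pod access to IMDS (in the spirit of the EKS security best practices linked above) is a deny-by-IP egress NetworkPolicy. This is only an illustrative sketch; the policy name and namespace are placeholders, and it requires a CNI that enforces NetworkPolicy:

```yaml
# Hypothetical example: deny pod egress to the IMDS endpoint while
# allowing all other egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-imds          # placeholder name
  namespace: kube-system    # apply per namespace as needed
spec:
  podSelector: {}           # all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # EC2 instance metadata service
```

Setting the instance's IMDS hop limit to 1 has a similar effect for pods on non-host networks.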

Anything else we need to know?:

The DaemonSet uses hostNetwork: true to regain access to the IMDS (https://github.com/kubernetes-sigs/aws-efs-csi-driver/pull/188)

Environment

  • Kubernetes version (use kubectl version): EKS 1.18
  • Driver version: master

korbin avatar Jan 27 '21 18:01 korbin

I was able to get the efs-csi-controller running by removing the liveness check and switching to hostNetwork: true until the IMDS dependency can be resolved.
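As a rough sketch of that workaround, the controller Deployment can be patched along these lines (field paths follow the upstream controller-deployment.yaml; this is illustrative, not an official project manifest):

```yaml
# Illustrative strategic-merge patch for the efs-csi-controller
# Deployment: run on the host network (so IMDS at 169.254.169.254 is
# reachable) and drop the livenessProbe that fails during startup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-csi-controller
  namespace: kube-system
spec:
  template:
    spec:
      hostNetwork: true
      containers:
        - name: efs-plugin
          livenessProbe: null   # null removes the probe in a merge patch
```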

korbin avatar Feb 01 '21 19:02 korbin

Yes, it seems we'll have to add hostNetwork: true. Thank you for trying it and verifying that it fixes the issue.

(I am looking into ways to avoid talking to instance metadata altogether, since we only use it for basic information like the instance ID, but I'm not sure yet whether that's feasible.)

wongma7 avatar Feb 01 '21 21:02 wongma7

I tried this in an on-premises physical-server environment (not an AWS environment), and it still throws the error below. Does any extra configuration need to be done for this scenario? Thanks.

could not get metadata from AWS: EC2 instance metadata is not available

davidshtian avatar Apr 07 '21 07:04 davidshtian

@korbin Hi Korbin~ I hit the same issue in an on-premises physical-server Kubernetes environment. For this workaround, I tried removing the livenessProbe section from the efs-plugin container in the controller deployment (https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/deploy/kubernetes/base/controller-deployment.yaml), but the error still occurs. Do I also need to remove the liveness-probe container? Thanks.

I was able to get the efs-csi-controller running by removing the liveness check and switching to hostNetwork: true until the IMDS dependency can be resolved.

davidshtian avatar Apr 07 '21 14:04 davidshtian

I've also been running into this.

Another thing to consider is how to ensure that the ports used by aws-efs-csi-driver do not conflict with the ports used by aws-ebs-csi-driver. Both drivers take a similar approach: each has a Deployment and a DaemonSet that require hostNetwork and hostPort to function correctly when IMDS access is blocked.
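To illustrate the conflict: with hostNetwork, each driver's liveness endpoint binds a real port on the node, so the two drivers must not pick the same one. The port numbers below are placeholders, not the drivers' actual defaults:

```yaml
# Illustrative fragment: when two hostNetwork pods on the same node both
# expose a healthz hostPort, the ports must differ or one pod will fail
# to schedule/bind.
ports:
  - name: healthz
    containerPort: 9809   # placeholder value for one driver
    hostPort: 9809
    protocol: TCP
# The other driver's node pods would need a different value, e.g. 9808.
```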

groodt avatar Apr 26 '21 11:04 groodt

@groodt yes the poor choice of default port definitely needs fixing: https://github.com/kubernetes-sigs/aws-efs-csi-driver/pull/437/files

Regarding the need for instance metadata in general: we arrived at a fix in EBS and will probably copy it over here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/855. The tradeoff is that the driver will need permission to get Nodes, but that's a read-only permission and can come included in the RBAC artifacts; it won't require any extra work on the part of users.
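The "get Nodes" permission described above would look roughly like the following RBAC fragment. The object and service-account names here are illustrative placeholders; the real rules would ship with the driver's manifests:

```yaml
# Illustrative ClusterRole/Binding granting the driver read access to
# Node objects, so metadata (instance ID, zone, etc.) can come from the
# Kubernetes API instead of IMDS.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: efs-csi-node-getter        # placeholder name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: efs-csi-node-getter        # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: efs-csi-node-getter
subjects:
  - kind: ServiceAccount
    name: efs-csi-controller-sa    # placeholder service account name
    namespace: kube-system
```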

wongma7 avatar May 03 '21 17:05 wongma7

Thanks! I think that sharing a common approach with the EBS driver makes sense if possible. I think normalising the use of IRSA where possible can only be a good thing, particularly for the AWS provided add-ons and utilities.

groodt avatar May 04 '21 09:05 groodt

@wongma7 Thanks for making progress on the ebs-csi-driver (https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/821). I've been able to successfully remove hostNetwork access for that controller. Any updates on a similar approach for the efs-csi-driver?

I would love to remove hostNetwork access for both EFS and EBS (node and controllers: 4 workloads total). So far, I've only been able to remove hostNetwork for the ebs-csi-controller. (1/4 workloads).

groodt avatar Jun 27 '21 23:06 groodt

I have some updates here. I can confirm that aws-ebs-csi-driver as of v1.3.0 is able to run successfully without hostNetwork using IRSA. https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/821#issuecomment-923413504
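For context, IRSA (IAM Roles for Service Accounts) means annotating the driver's service account with an IAM role, so the pod gets AWS credentials from a projected web-identity token instead of from IMDS. Roughly (the names and role ARN below are placeholders):

```yaml
# Illustrative: an IRSA-annotated service account. EKS's pod identity
# webhook injects AWS_WEB_IDENTITY_TOKEN_FILE / AWS_ROLE_ARN into pods
# that use this service account, removing the IMDS credential lookup.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-csi-controller-sa      # placeholder name
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/EFSCSIDriverRole  # placeholder ARN
```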

@wongma7 Is it reasonable to expect that the same will be possible with the aws-efs-csi-driver in future?

groodt avatar Sep 20 '21 22:09 groodt

Yes, that is totally reasonable; the EFS driver needs to be able to run without hostNetwork/IMDS for exactly the same reasons as EBS. The effort entails copying the code and tests (an end-to-end test on a "real" EKS cluster whose nodes have IMDS disabled) from EBS to here. I don't have an ETA, but that is my plan.

wongma7 avatar Sep 20 '21 23:09 wongma7

That sounds awesome! I'll follow this issue for any updates. 🚀

groodt avatar Sep 20 '21 23:09 groodt

@wongma7 Any updates on this issue? Really looking forward to removing hostNetwork... ;-)

Quarky9 avatar Dec 08 '21 15:12 Quarky9

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 08 '22 16:03 k8s-triage-robot

/remove-lifecycle stale

niranjan94 avatar Mar 08 '22 18:03 niranjan94

Have raised a PR that I think should resolve this issue here: https://github.com/kubernetes-sigs/aws-efs-csi-driver/pull/681

jonathanrainer avatar Apr 24 '22 09:04 jonathanrainer

/lifecycle stale

k8s-triage-robot avatar Jul 23 '22 10:07 k8s-triage-robot

/lifecycle rotten

k8s-triage-robot avatar Aug 22 '22 11:08 k8s-triage-robot

/close not-planned

k8s-triage-robot avatar Sep 21 '22 12:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 21 '22 12:09 k8s-ci-robot