
Unable to install on EKS with custom CNI

Open · li3 opened this issue · 7 comments

What steps did you take and what happened:

  1. Create an EKS cluster
  2. Remove aws vpc CNI
  3. Install a custom CNI like Cilium or Calico running in tunnel mode
  4. Run clusterctl init
  5. clusterctl init times out waiting for cert-manager to become ready (a rough command sketch of these steps follows this list).
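For reference, a sketch of those steps; the cluster name, region, and CNI install method are illustrative and will differ per setup:

    # 1. Create an EKS cluster (name/region are placeholders)
    eksctl create cluster --name capi-mgmt --region us-west-2

    # 2. Remove the AWS VPC CNI so the custom CNI owns pod networking
    kubectl -n kube-system delete daemonset aws-node

    # 3. Install a custom CNI, e.g. Cilium via its CLI (or Calico in tunnel mode)
    cilium install

    # 4. Initialize Cluster API; this is the step that hangs on cert-manager
    clusterctl init --infrastructure aws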

What did you expect to happen:

Actually, I think this is expected given the current state of things. With EKS, the control plane cannot participate in the custom CNI, so the API server cannot reach the pod running the webhook out of the box.

In this situation, cert-manager recommends running the webhook pod with host networking so the API server can reach it.

This is a common problem with all webhooks on EKS when a custom CNI is in use.
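If you install cert-manager yourself via Helm (rather than letting clusterctl install it), the host-networking setup looks roughly like the values below; webhook.hostNetwork and webhook.securePort are chart options, and 10260 is just a port chosen to avoid colliding with the kubelet's 10250 on the node:

    # values.yaml sketch for the cert-manager Helm chart
    webhook:
      hostNetwork: true
      # the kubelet already listens on 10250 on the host network, so move the webhook
      securePort: 10260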

Anything else you would like to add:

I'm not sure of the best course of action, but if I could skip the cert-manager install, I could handle setting that up beforehand.

Alternatively, it would help if CAPI could be installed easily via a Helm chart or kustomize manifests.

Environment:

  • most recent clusterctl version
  • Amazon EKS

/kind bug

li3 · May 01 '22 18:05

It might be a good idea to raise this issue over at the Cluster API Provider AWS repo to get input from people who really know the ins and outs of EKS and Amazon's networking solution.

if I could skip the cert manager install then I could handle setting that up beforehand.

I'm sure I don't understand the full context here, but with clusterctl, if you already have cert-manager installed, the installation should be skipped automatically.

killianmuldoon · May 05 '22 15:05

should be skipped automatically.

Ahh interesting, we never got that to work reliably.

We did work around this issue by dropping our dependency on clusterctl. We used kustomize to pull in cluster-api-components.yaml and patched over all the environment variable substitutions plus the host networking bits for all the webhooks. We did the same for the infrastructure providers we needed. Not a great solution, but it kept us moving for now.
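For anyone hitting the same thing, a minimal sketch of that kustomize approach, assuming cluster-api-components.yaml has been downloaded from a release; the deployment targeted here is illustrative and varies by CAPI version, and the variable substitutions clusterctl normally performs still have to be patched in separately:

    # kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - cluster-api-components.yaml   # downloaded from a Cluster API release
    patches:
      - target:
          kind: Deployment
          name: capi-controller-manager   # hypothetical target; patch whichever deployments serve webhooks
        patch: |-
          - op: add
            path: /spec/template/spec/hostNetwork
            value: true

Applying the result with kubectl apply -k then stands in for what clusterctl init would have installed.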

li3 · May 05 '22 17:05

+1 to transferring this to the Cluster API Provider AWS repo (@li3, could you take care of this? I don't have the permissions to use the transfer-issue feature). @sedefsavas @pydctw @richardcase, if there are changes we can make to simplify the UX for managed providers, we should consider adding them to the ongoing proposal.

fabriziopandini · May 08 '22 21:05

/transfer cluster-api-provider-aws

richardcase · Jun 08 '22 11:06

@li3: This issue is currently awaiting triage.

If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Jun 08 '22 11:06

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Sep 06 '22 11:09

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Oct 06 '22 12:10

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Nov 05 '22 12:11

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Nov 05 '22 12:11