
Avoid access to vSphere API from Pods in the cluster

Open sathieu opened this issue 3 years ago • 7 comments

What happened:

When using vSphere Container Storage Plugin, a set of privileges is needed on vSphere.

In our environment, vSphere API access is restricted to trusted subnets, and the Kubernetes nodes are not in those subnets (not even the control plane nodes). We can, however, add another trusted Kubernetes cluster inside those restricted subnets, with access to both the vSphere API and the Kubernetes API of the workload clusters.

What you expected to happen:

Ability to move parts of the vSphere Container Storage Plugin into a management cluster, and ensure that only this cluster needs access to the vSphere API.

How to reproduce it (as minimally and precisely as possible):

Install the plugin in a cluster without access to the vSphere API -> it fails.
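For illustration, here is a rough sketch of the split I have in mind, assuming the controller components and their CSI sidecars can be pointed at a remote cluster via a kubeconfig (the standard csi-provisioner and csi-attacher sidecars accept a --kubeconfig flag; whether every vSphere-specific container supports the same is an assumption on my part):

```yaml
# Sketch only: CSI controller components running in the management cluster,
# watching the workload cluster's API through a mounted kubeconfig.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vsphere-csi-controller
  namespace: vmware-system-csi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vsphere-csi-controller
  template:
    metadata:
      labels:
        app: vsphere-csi-controller
    spec:
      containers:
        - name: csi-provisioner
          image: registry.k8s.io/sig-storage/csi-provisioner:v3.2.1
          args:
            - --csi-address=/csi/csi.sock
            # watch PVCs in the workload cluster instead of the local one
            - --kubeconfig=/etc/workload/kubeconfig
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: workload-kubeconfig
              mountPath: /etc/workload
              readOnly: true
        # csi-attacher, csi-resizer, the syncer, and the vsphere-csi-controller
        # container itself would need the same kubeconfig treatment (elided here).
      volumes:
        - name: socket-dir
          emptyDir: {}
        - name: workload-kubeconfig
          secret:
            secretName: workload-cluster-kubeconfig # hypothetical Secret name
```

Only the management cluster would then need a route to the vSphere API; the node DaemonSet would stay in the workload cluster.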

sathieu avatar May 09 '22 09:05 sathieu

@sathieu: The label(s) /label feature cannot be applied. These labels are supported: api-review, tide/merge-method-merge, tide/merge-method-rebase, tide/merge-method-squash, team/katacoda, refactor

In response to this:

/label feature

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar May 09 '22 09:05 k8s-ci-robot

@sathieu Have you explored the Tanzu Kubernetes Guest Cluster? https://cormachogan.com/2020/09/29/deploying-tanzu-kubernetes-guest-cluster-in-vsphere-with-tanzu/

CSI driver running in the Tanzu Guest Cluster does not make a connection to the vCenter server.

divyenpatel avatar May 19 '22 18:05 divyenpatel

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 17 '22 20:08 k8s-triage-robot

Possibly OAuth - if added fully in vSphere 8 - could at least remove the need to provide credentials. It would also make it possible to couple access keys with a limited set of privileges.

@sathieu Have you explored the Tanzu Kubernetes Guest Cluster? https://cormachogan.com/2020/09/29/deploying-tanzu-kubernetes-guest-cluster-in-vsphere-with-tanzu/

CSI driver running in the Tanzu Guest Cluster does not make a connection to the vCenter server.

How's that possible? Doesn't the CSI driver require access to the API to create volumes and attach/detach volumes to/from cluster nodes? I skimmed the article you linked, but it doesn't really explain that topic.

We "solved" the issue by heavily limiting what those credentials can do so the impact is limited to "a cluster admin can destroy his own cluster" (which is always the case, regardless of the CSI). But the required communication from all_worker_nodes to the vSphere API is still a burden we'd like to see gone from a security perspective.

Eventually the (hopefully) upcoming full OAuth integration in vSphere 8 might help with the credentials and privilege situation.

omniproc avatar Sep 06 '22 07:09 omniproc

@omniproc According to the Tanzu Kubernetes Grid Service Architecture (the diagram is not great, but I don't know a better one), workload clusters access the API through the supervisor cluster. This is a bit better, but the traffic still flows, transitively, from the workload cluster to the vSphere API.

Also, this way of working is not open source, or at least not documented well enough to install on a vanilla cluster.

sathieu avatar Sep 06 '22 07:09 sathieu

@sathieu that's unfortunate and seems like a rather artificial limitation put on the OSS project.

I'm not sure if it's helpful, but the source hints that the ImprovedVolumeTopology flag does the following:

is the feature flag used to make the following improvements to topology feature: avoid taking in VC credentials in node daemonset.

However, in my tests enabling this feature gate seems to do way more than just that, and the docs are not really clear on what it does exactly. I guess it's linked to the topology-aware setup, yet normally that would be enabled using those flags, so it's kind of confusing. I didn't have time to play around with it or read the source to better understand the behaviour (my guess is that the args in the manifest are leftovers from previous versions and by now the gate is enabled using this ConfigMap).
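If that guess is right, toggling the gate would look roughly like this (the ConfigMap name is taken from the release manifest; the other feature-state keys it ships with are elided and should be left unchanged):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: internal-feature-states.csi.vsphere.vmware.com
  namespace: vmware-system-csi
data:
  improved-volume-topology: "true"
  # ...remaining feature-state keys from the shipped manifest stay as-is
```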

The docs about the "topology aware" feature add even more confusion and state the exact opposite:

By default, the vSphere Cloud Provider Interface and vSphere Container Storage Plug-in pods are scheduled on Kubernetes control plane nodes. For non-topology aware Kubernetes clusters, it is sufficient to provide the credentials of the control plane node to vCenter Server where this cluster is running. For topology-aware clusters, every Kubernetes node must discover its topology by communicating with vCenter Server. This is required to utilize the topology-aware provisioning and late binding feature.

omniproc avatar Sep 06 '22 07:09 omniproc

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 06 '22 08:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 05 '22 08:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 05 '22 08:11 k8s-ci-robot

/reopen

Actually, this is already possible; I've implemented it in the Helm chart: https://github.com/vsphere-tmm/helm-charts/pull/50.

Still, I think some official docs are needed, so I'm reopening.
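The gist of the change, as an illustrative values sketch (the key names here are made up for readability, not the chart's exact schema):

```yaml
# Hypothetical values.yaml: controller components in the management cluster,
# node plugin deployed separately inside the workload cluster.
controller:
  enabled: true
  # kubeconfig Secret granting access to the workload cluster's API
  workloadKubeconfigSecret: workload-cluster-kubeconfig
node:
  # the node DaemonSet needs no vSphere API access of its own
  enabled: false
```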

sathieu avatar Nov 06 '22 19:11 sathieu

@sathieu: Reopened this issue.

In response to this:

/reopen

Actually, this is already possible; I've implemented it in the Helm chart: https://github.com/vsphere-tmm/helm-charts/pull/50.

Still, I think some official docs are needed, so I'm reopening.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 06 '22 19:11 k8s-ci-robot

/assign @divyenpatel

lipingxue avatar Nov 17 '22 22:11 lipingxue

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 17 '22 23:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 17 '22 23:12 k8s-ci-robot