Feat: Support direct import of cluster certificates from other platforms (such as ACK, GKE)
What would you like to be added?
Support direct import of cluster certificates from other platforms (such as ACK, GKE)
Why is this needed?
Currently, only manual upload of cluster certificates is supported, which is neither simple nor convenient for users on cloud platforms.
Current Status:
Karpor currently supports three types of credentials:
- ServiceAccountToken
- X509
- ExecConfig (primarily used for EKS integration, implemented through #564).
Cloud Kubernetes provider
Some cloud providers' managed Kubernetes services, such as EKS and GKE, export kubeconfig files that invoke local command-line tools to obtain temporary credentials, reading local files or environment variables in the process.
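For reference, the user entry in such a kubeconfig typically looks like the snippet below. This is only an illustrative, EKS-flavored example; the exact command, arguments, and names vary by provider and CLI version (GKE, for instance, uses the gke-gcloud-auth-plugin command instead).
# example exec-based user entry from a cloud provider kubeconfig (illustrative)
users:
- name: eks-example-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - eks
      - get-token
      - --cluster-name
      - example-cluster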
For this kind of cluster, users can create a read-only ClusterRole in the cluster to be imported, along with a corresponding ServiceAccount and a permanent secret (token), for the purpose of importing the cluster.
# example Cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karpor-viewer
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
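To complete this workaround, the ServiceAccount, ClusterRoleBinding, and long-lived token Secret would look roughly like the minimal sketch below; the namespace and names are placeholders. Note that since Kubernetes 1.24 the token Secret has to be created explicitly, because ServiceAccount token Secrets are no longer auto-generated.
# example ServiceAccount, ClusterRoleBinding, and long-lived token Secret (illustrative)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: karpor-viewer
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: karpor-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: karpor-viewer
subjects:
- kind: ServiceAccount
  name: karpor-viewer
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: karpor-viewer-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: karpor-viewer
type: kubernetes.io/service-account-token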
However, this approach has two issues: first, it is intrusive to the user's cluster; second, each user cluster needs to export a long-lived token, which creates security risks and maintenance burden.
Proposal: Cloud Credential Management
Add a new GVK called cloudcredentials to store users' cloud credentials, and then reference these credentials in cluster.karpor.io/v1beta1/cluster.
Below is the user workflow:
Dashboard:
- Create a cloudCredential in the dashboard.
- Upload a kubeconfig and select a specific cloudCredential to associate with it. The user field in the kubeconfig will then be ignored, and the client will be built from the cloudCredential.
CLI:
- Create a cloudCredential using a YAML file.
- Create a cluster that references the cloudCredential using a YAML file (a hypothetical sketch follows below).
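To make the CLI workflow concrete, a hypothetical pair of manifests might look like the sketch below. The cloudcredentials schema and the reference field on cluster.karpor.io/v1beta1/cluster do not exist yet, so every kind, field name, and value here is an assumption for illustration only.
# hypothetical CloudCredential and Cluster referencing it (all fields are assumptions)
apiVersion: cluster.karpor.io/v1beta1   # group/version for CloudCredential is an assumption
kind: CloudCredential
metadata:
  name: my-aws-account
spec:
  provider: aws
  # actual credential material would ideally be stored encrypted and
  # only decrypted at runtime
  secretRef:
    name: my-aws-account-keys
---
apiVersion: cluster.karpor.io/v1beta1
kind: Cluster
metadata:
  name: my-eks-cluster
spec:
  access:
    endpoint: https://example.eks.amazonaws.com
    # the user field from an uploaded kubeconfig is ignored in favor of
    # the referenced cloud credential
    cloudCredentialRef:
      name: my-aws-account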
cc @elliotxx @fanfan-yu
Hi. I'm interested in the advantage of using a new CRD. If this solution is more secure, it will bring new changes: how to store these CRDs.
@ruquanzhao Cool, let me see the details 👍
Hi. I'm interested in the advantage of using a new CRD. If this solution is more secure, it will bring new changes: how to store these CRDs.
Using the same CRD allows users to more easily maintain clusters under the same cloud account.
it will bring new changes: how to store these CRDs.
@fanfan-yu You are right. Security is another important point. Currently, anyone with the Karpor admin certificate can access the tokens/X509 certificates for all clusters. Ideally, these tokens, X509 certs, and the CloudCredential CRDs should be encrypted and only decrypted at runtime.
Managing multi-cloud Kubernetes clusters seems to be a common challenge, so I'm curious—do you know how other Kubernetes visualization projects handle this?
Yes, I am investigating similar projects such as Lens and Karmada, but there are no concrete findings yet. I think everyone's approach is similar; it will take time to fill in this part of the functionality, and there may be follow-up issues created for the specific work. I was a bit busy over the weekend, so I'm replying a little late, sorry.
Cool, appreciate it. I'd like to see how this goes and make some contributions if needed.
Great, brother! Once I have more concrete content, I will create an issue covering the specific work. cc @fanfan-yu, you can take a look together as well.
Hi, I am the one who added support for the AWS kubeconfig auth method. Recently, I looked at other projects that connect to multiple clusters. Some of them use agents to transfer data, deploying a Deployment into each cluster. Have you thought about this approach?
@CirillaQL Welcome to the discussion! Long time no see. Yes, installing an agent in each cluster is also an option. Can you share links to the implementations or documentation you have seen? When Karpor was first designed, to keep the barrier to entry low and avoid intruding on user clusters, a centralized deployment mode was chosen (all components are deployed together and can be installed with one click through Helm). But I recently found that in some large-scale scenarios this mode has single-point performance problems, especially in karpor-syncer. So I also want to extend the deployment architecture, and deploying an agent in each cluster is a feasible direction.
@elliotxx There is a Kubernetes workload AI controller system called KubeDoor; I believe they use an agent to accomplish that. Also, as far as I know, Karmada also uses an agent to connect to the control plane. I understand that this is a big change for Karpor. I also read the CRD approach you mentioned at the top. If you choose a solution to implement, please let me know.
Good idea! We plan to implement the agent in high-availability mode, similar to the solution in Karmada (details are in https://github.com/KusionStack/karpor/tree/feat-karpor-agent). However, it will add complexity for users, so we will keep discussing a more comprehensive solution.
If you have better suggestions, you are welcome to join the discussion.
@fanfan-yu Thank you! I didn't notice that feature. I will read through the code and discuss it later.