Kubeconfig generation
Hello!
- Vote on this issue by adding a 👍 reaction
- If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
Issue details
It's common for users to generate kubeconfig files as part of cluster creation. Today, this is mostly handled in user code (possibly copied from examples), or as part of a ComponentResource like pulumi-eks that generates the kubeconfig in a method. It may be useful to pull this workflow into the provider.
This might be exposed as methods on the Provider class, which currently accepts a kubeconfig parameter, but expects that the value was populated separately.
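For reference, here is a minimal sketch of the ComponentResource route mentioned above, assuming a recent @pulumi/eks (which exposes the generated kubeconfig both as a kubeconfig output and via a getKubeconfig method):

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// pulumi-eks builds the kubeconfig internally during cluster creation.
const cluster = new eks.Cluster("example");

// A kubeconfig scoped to a particular role/profile, generated by a method
// on the component rather than by the kubernetes provider itself.
const kubeconfig = cluster.getKubeconfig({
    roleArn: "exampleRoleArn",
    profileName: "exampleProfile",
});

// The Provider consumes the value but plays no part in producing it,
// which is the gap this issue proposes closing.
const provider = new k8s.Provider("k8s", { kubeconfig });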
Affected area/feature
Generating the kubeconfig requires some pluggability in the auth mechanism, which is somewhat cloud-provider specific. I am definitely supportive of providing some machinery in the kubernetes provider to support kubeconfig generation, but the auth plugin might make this infeasible (at least unilaterally in the kubernetes provider). Certainly, having our cloud provider SDKs support a kubeconfig generation method seems very high value.
I have a WIP example to generate a kubeconfig in https://github.com/pulumi/pulumi-kubernetes/tree/lblackstone/kubeconfig-gen
import * as k8s from "@pulumi/kubernetes";

export const kubeconfig = k8s.kubeconfig.eks({
    roleArn: "exampleRoleArn",
    profileName: "exampleProfile",
}).then(v => v.result);
Outputs:
    kubeconfig: (json) {
        profileName: "exampleProfile"
        roleArn    : "exampleRoleArn"
    }
I don't understand how we'd avoid baking cloud specifics into the provider. Wouldn't it be preferable that the eks provider continue to handle this?
The idea is that we would support multiple cloud-specific kubeconfigs. We have users who manage EKS clusters without using pulumi-eks, so the proposal is to generalize kubeconfig generation based on the required inputs for each cloud provider.
I'm not completely sure that pulumi-kubernetes is the right place for that logic, but it would take advantage of the existing code-generation machinery and make this capability available to our existing user base without additional friction.
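To make that concrete, here is a sketch of what a generalized surface might look like, extending the WIP eks() example above. Note that the gke() function and its inputs are purely hypothetical:

import * as k8s from "@pulumi/kubernetes";

// From the WIP branch above: EKS-specific inputs.
const eksKubeconfig = k8s.kubeconfig.eks({
    roleArn: "exampleRoleArn",
    profileName: "exampleProfile",
}).then(v => v.result);

// Hypothetical GKE analog: each cloud gets a generator keyed on the
// inputs its auth plugin needs (roughly what `gcloud container
// clusters get-credentials` would take).
const gkeKubeconfig = k8s.kubeconfig.gke({
    project: "example-project",
    location: "us-central1",
    cluster: "example-cluster",
}).then(v => v.result);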
Today, this is mostly handled in user code (possibly copied from examples), or as part of a ComponentResource like pulumi-eks that generates the kubeconfig in a method. It may be useful to pull this workflow into the provider.
@lblackstone I tried using the pulumi-eks component, but it uses an old aws-classic package, and I wanted to use the latest aws provider. Can you share a sample like the one you mentioned in your comment, preferably for the YAML runtime?
My example didn't include a complete kubeconfig, unfortunately. It was just focused on plumbing the necessary inputs through.
That said, you can look at https://github.com/pulumi/pulumi-eks/blob/v2.2.1/nodejs/eks/cluster.ts#L199-L261 from the pulumi-eks library to see how we're generating the kubeconfig. You should be able to do something similar with interpolation in YAML using pulumi-aws directly.
@lblackstone I tried putting the literal YAML content in the kubeconfig, but I'm getting the following error.
k8sProvider:
  type: pulumi:providers:kubernetes
  properties: # The arguments to resource properties.
    kubeconfig: |
      apiVersion: v1
      clusters:
      - cluster:
          server: ...
Error
pulumi:providers:kubernetes k8sProvider error: rpc error: code = Unknown desc = failed to parse kubeconfig: json: cannot unmarshal string into Go struct field ExecConfig.users.user.exec.args of type []string
This is the exact literal I tried. I thought I'd test with real values first and then, if that works, change it to interpolation. But this is not working.
k8sProvider:
  type: pulumi:providers:kubernetes
  properties: # The arguments to resource properties.
    kubeconfig: |
      apiVersion: v1
      clusters:
      - cluster:
          server: <server-url>
          certificate-authority-data: <cert-data>
        name: kubernetes
      contexts:
      - context:
          cluster: kubernetes
          user: aws
        name: aws
      current-context: aws
      kind: Config
      preferences: {}
      users:
      - name: aws
        user:
          exec:
            apiVersion: client.authentication.k8s.io/v1beta1
            command: aws-iam-authenticator
            args:
            - "token"
            - "-i"
            - "pl-native"
            - "-r"
            - "<role-arn>"
https://github.com/pulumi/pulumi-kubernetes-operator/issues/557