
Preview fails for Helm Release when trying to get server version from Kubernetes

Open michael-barker opened this issue 2 years ago • 5 comments

What happened?

Performing a preview for a Helm Release fails after upgrading to pulumi-kubernetes 3.21.0. This appears to be a result of the change to automatically fill in .Capabilities for Helm charts (#2155). With version 3.20.x the preview would succeed even when not authenticated with the cluster.

Steps to reproduce

  • An EKS cluster is provisioned with Pulumi in a separate cluster project.
  • Outputs from the cluster project are used to configure the Kubernetes provider.
  • Provision a Helm Release with Pulumi with no AWS profile configured (a minimal sketch of this setup follows).
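
A minimal sketch of this kind of setup, written with the Python SDK and using illustrative stack, output, and chart names (the reporter's project uses Node.js, so this is not the original code, only its shape):

import pulumi
import pulumi_kubernetes as k8s

# The kubeconfig comes from a separate cluster stack; the environment running
# the preview has no AWS profile configured.
cluster_stack = pulumi.StackReference('my-org/cluster-project/dev')  # assumed stack name
kubeconfig = cluster_stack.get_output('kubeconfig')                  # assumed output name

provider = k8s.Provider('eks-k8s', kubeconfig=kubeconfig)

release = k8s.helm.v3.Release(
    'eck',
    chart='eck-operator',                                            # illustrative chart
    repository_opts=k8s.helm.v3.RepositoryOptsArgs(
        repo='https://helm.elastic.co',
    ),
    opts=pulumi.ResourceOptions(provider=provider),
)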

Expected Behavior

Preview should succeed as it did in 3.20.0.

Actual Behavior

Preview fails with the following error.

Diagnostics:
  kubernetes:helm.sh/v3:Release (eck):
    error: could not get server version from Kubernetes: the server has asked for the client to provide credentials

Output of pulumi about

CLI          
Version      3.46.1
Go Version   go1.19.2
Go Compiler  gc

Plugins
NAME        VERSION
aws         5.10.0
docker      3.2.0
eks         0.41.2
kubernetes  3.22.1
nodejs      unknown

Host     
OS       ubuntu
Version  22.04
Arch     x86_64

This project is written in nodejs: executable='/home/linuxbrew/.linuxbrew/bin/node' version='v18.11.0'

Current Stack: ---

TYPE                                                        URN
---


Found no pending operations associated with cequence/tenant1-dev

Backend        
Name           pulumi.com
URL            ---
User           ---
Organizations  ---

Dependencies:
NAME                  VERSION
@pulumi/pulumi        3.38.0
@types/node           17.0.41
@cequence/pulumi-k8s  1.87.0
@pulumi/aws           5.10.0
@pulumi/awsx          0.40.0
@pulumi/kubernetes    3.22.1

Pulumi locates its logs in /tmp by default

Additional context

Running previews without cluster credentials is useful for previewing branches in CI. We're using the Pulumi integration with GitLab to add previews to MRs. Granting the preview jobs cluster access presents security challenges even with read-only access, since someone could then read a Kubernetes secret by modifying the Pulumi program on a branch.

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

michael-barker · Nov 17 '22 03:11

@michael-barker If I understand correctly, it looks like you are supplying a valid kubeconfig that doesn't have authorization to perform actions on the cluster?

I ran a preview locally with a non-existent kubeconfig, and it worked, so it seems like you're getting into a state where the provider thinks that it has cluster access, but then fails due to auth permissions. We do best-effort previews if the cluster is unreachable, but don't explicitly provide a way to operate in a mode with reduced permissions.

KUBECONFIG=dummy pulumi up
Previewing update (dev)

View Live: https://app.pulumi.com/lblackstone/pulumi-k8s-test/dev/previews/21c275f4-2b3f-4545-8f4f-ff83fedd8fa2

     Type                              Name                 Plan       Info
 +   pulumi:pulumi:Stack               pulumi-k8s-test-dev  create     1 message
 +   └─ kubernetes:helm.sh/v3:Release  foo                  create


Diagnostics:
  pulumi:pulumi:Stack (pulumi-k8s-test-dev):
    W1117 15:54:10.836778   23610 loader.go:223] Config not found: dummy


Do you want to perform this update? details
+ pulumi:pulumi:Stack: (create)
    [urn=urn:pulumi:dev::pulumi-k8s-test::pulumi:pulumi:Stack::pulumi-k8s-test-dev]
    + kubernetes:helm.sh/v3:Release: (create)
        [urn=urn:pulumi:dev::pulumi-k8s-test::kubernetes:helm.sh/v3:Release::foo]
        [provider=urn:pulumi:dev::pulumi-k8s-test::pulumi:providers:kubernetes::default_3_22_1::04da6b54-80e4-46f7-96ec-b56ff0331ba9]
        atomic                  : false
        chart                   : "./secret"
        cleanupOnFail           : false
        createNamespace         : false
        dependencyUpdate        : false
        devel                   : false
        disableCRDHooks         : false
        disableOpenapiValidation: false
        disableWebhooks         : false
        forceUpdate             : false
        lint                    : false
        name                    : "foo-cc116bbf"
        recreatePods            : false
        renderSubchartNotes     : false
        replace                 : false
        resetValues             : false
        resourceNames           : {
            Secret/v1: [
                [0]: "mysecret"
            ]
        }
        reuseValues             : false
        skipAwait               : false
        skipCrds                : false
        timeout                 : 300
        verify                  : false
        waitForJobs             : false

lblackstone · Nov 17 '22 23:11

+1 I am using python with:

pulumi==3.47.1
pulumi-kubernetes==3.23.1

and I also have this issue. My Kubernetes cluster is based on EKS, and sometimes when I run pulumi up or pulumi preview I get this error, but only the first time. For example, at the start of the work day I run pulumi preview and get this error; when I run pulumi preview again, there is no error. I think the check runs before the Kubernetes provider object gets its token, on the assumption that the provider is already configured.

mszewczyk-ipwt · Feb 20 '23 13:02
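
One possible explanation, sketched here under assumptions rather than taken from the provider's code: if the kubeconfig handed to the provider embeds a static token (for example, one minted by aws eks get-token when the cluster stack was last deployed), that token expires after roughly 15 minutes, which would match "fails only on the first run of the day". A kubeconfig that uses the exec credential plugin instead shells out for a fresh token on every request. The endpoint, CA data, and names below are placeholders:

import json
import pulumi
import pulumi_kubernetes as k8s

def exec_kubeconfig(cluster_name, endpoint, ca_data):
    # Build a kubeconfig whose user entry runs "aws eks get-token" on demand,
    # so a fresh token is fetched each time the provider talks to the cluster.
    return json.dumps({
        'apiVersion': 'v1',
        'kind': 'Config',
        'clusters': [{'name': cluster_name,
                      'cluster': {'server': endpoint,
                                  'certificate-authority-data': ca_data}}],
        'contexts': [{'name': 'aws', 'context': {'cluster': cluster_name, 'user': 'aws'}}],
        'current-context': 'aws',
        'users': [{'name': 'aws', 'user': {'exec': {
            'apiVersion': 'client.authentication.k8s.io/v1beta1',
            'command': 'aws',
            'args': ['eks', 'get-token', '--cluster-name', cluster_name],
        }}}],
    })

provider = k8s.Provider('eks', kubeconfig=exec_kubeconfig(
    'my-cluster', 'https://EXAMPLE.eks.amazonaws.com', 'BASE64_CA_DATA'))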

It seems I've run into the same issue with my playground k8s setup: https://github.com/chubbyts/chubbyts-petstore/tree/master/pulumi

pulumi refresh works fine, but pulumi up leads to the following:

Previewing update (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/dominikzogg/chubbyts-petstore/dev/previews/6e622c1f-d1eb-45f0-89cc-b04e4029e71f

     Type                              Name                   Plan       Info
     pulumi:pulumi:Stack               chubbyts-petstore-dev             
 ~   ├─ pulumi:providers:kubernetes    k8s-provider           update     [diff: ~version]
 ~   ├─ docker:index:Image             node                   update     [diff: ~build]
     └─ kubernetes:helm.sh/v3:Release  helm-metrics-server               1 error


Diagnostics:
  kubernetes:helm.sh/v3:Release (helm-metrics-server):
    error: could not get server version from Kubernetes: the server has asked for the client to provide credentials

Versions:

pulumi version: v3.63.0
k8s: 1.26.3-do.0

dominikzogg · Apr 16 '23 18:04

Regenerating the API token at DigitalOcean and running pulumi config set digitalocean:token mysecrettoken --secret worked in my case to get it up and running again.

dominikzogg · Apr 16 '23 18:04
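
For context on why regenerating the token can help: the kubeconfig returned by the DigitalOcean provider embeds credentials tied to the DigitalOcean API token, so a revoked or expired token surfaces as the "provide credentials" error above. A rough sketch of the usual wiring, assuming the pulumi_digitalocean SDK and illustrative names and sizes:

import pulumi
import pulumi_digitalocean as do
import pulumi_kubernetes as k8s

cluster = do.KubernetesCluster(
    'playground',
    region='fra1',
    version='1.26.3-do.0',
    node_pool=do.KubernetesClusterNodePoolArgs(
        name='default', size='s-2vcpu-4gb', node_count=2),
)

# The raw kubeconfig carries credentials derived from the DigitalOcean API token;
# if that token is revoked or expired, the API server rejects the client.
kubeconfig = cluster.kube_configs.apply(lambda cfgs: cfgs[0].raw_config)
provider = k8s.Provider('do-k8s', kubeconfig=kubeconfig)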

I'm getting this on newly-created DO clusters. Bootstrapping the new cluster, even with --target=${CLUSTER_NAME}, results in errors from Pulumi trying to template the Helm chart:

Exception: invoke of kubernetes:helm:template failed: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: Get "redacted.k8s.ondigitalocean.com/version?timeout=32s": dial tcp: lookup redacted.k8s.ondigitalocean.com on 127.0.0.53:53: no such host

This makes sense because the cluster does not exist yet when the Helm chart is templated. Note that the timeout=32s is unaffected by the timeout settings in the chart's resource options:

from pulumi import CustomTimeouts, ResourceOptions
from pulumi_kubernetes.helm.v3 import Chart, LocalChartOpts

# provider, namespace, and cluster are defined earlier in the program
Chart(
    release_name='spotinfo-argocd',
    config=LocalChartOpts(
        path='../../k8s/charts/spotinfo-argocd',
        namespace=namespace.metadata.name,
    ),
    opts=ResourceOptions(
        provider=provider,
        depends_on=[namespace, cluster],
        custom_timeouts=CustomTimeouts(create='30m'),
    ),
)

reinvantveer · Apr 16 '24 11:04