
eks cluster creation fails with `Unhandled exception: TypeError: Cannot read properties of undefined (reading 'data')`

mohitreddy1996 opened this issue 2 years ago • 24 comments

Hello!

  • Vote on this issue by adding a 👍 reaction
  • To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already)

Issue details

We use the Amazon EKS Pulumi component (in TypeScript) to create EKS clusters. The configuration looks more or less as follows:

const cluster = new eks.Cluster(`eks-cluster`, {
        // vpcId,
        endpointPrivateAccess: true,
        endpointPublicAccess: true,
        subnetIds: // subnet ids,
        nodeGroupOptions: {
            // nodetype, desired capacity etc
            nodeAssociatePublicIpAddress: false,
        },
        providerCredentialOpts: {
            roleArn,
        },
        nodeAssociatePublicIpAddress: false,
        createOidcProvider: true
    }, { provider: awsProvider });

The resource plugin versions:

  • aws - 4.38.1
  • eks - 0.37.1

We have had this setup for a few months now and recently started seeing this error.

Error - Unhandled exception: TypeError: Cannot read properties of undefined (reading 'data')

I am assuming this is raised from one of these blocks: https://github.com/pulumi/pulumi-eks/blob/master/nodejs/eks/cluster.ts#L563 (we do see the log `Cluster is ready`, which could come from here: https://github.com/pulumi/pulumi-eks/blob/master/nodejs/eks/cluster.ts#L544).

I wonder if there is a version mismatch, or any new parameter which needs to be set on the Cluster definition.
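The error message itself is consistent with an unguarded nested property access. A minimal sketch of the difference between the failing pattern and a guarded one (illustrative types and names only, not the actual cluster.ts source):

```typescript
// Illustrative only: models the shape of the EKS cluster outputs; the real
// types live in @pulumi/eks, and the real access happens inside an apply().
interface ClusterOutputs {
    certificateAuthority?: { data: string };
}

// Unguarded access: throws "Cannot read properties of undefined (reading 'data')"
// at runtime when certificateAuthority is missing (the non-null assertion is
// compile-time only).
function caDataUnguarded(o: ClusterOutputs): string {
    return o.certificateAuthority!.data;
}

// Guarded access: optional chaining yields undefined instead of crashing.
function caDataGuarded(o: ClusterOutputs): string | undefined {
    return o.certificateAuthority?.data;
}

console.log(caDataGuarded({ certificateAuthority: { data: "abc==" } })); // "abc=="
console.log(caDataGuarded({})); // undefined
```

Of course, a guard only moves the problem: the interesting question is why the property is undefined in the first place.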

mohitreddy1996 avatar Apr 01 '22 04:04 mohitreddy1996

Hi @mohitreddy1996 - would it be possible for you to use v5.1.1, and report if the issue persists?

guineveresaenger avatar Apr 07 '22 00:04 guineveresaenger

Have a similar error. Tried updating to the latest, including v5.1.1, and still get the same result. Below are the diagnostics:

Diagnostics:
pulumi:pulumi:Stack (test):
error: Running program '/builds/test/packages/inf/src' failed with an unhandled exception:
TypeError: Cannot read properties of undefined (reading 'data')
at /builds/test/node_modules/@pulumi/cluster.ts:570:103
at /builds/test/node_modules/@pulumi/output.ts:383:31
at Generator.next ()
at /builds/test/node_modules/@pulumi/pulumi/output.js:21:71
at new Promise ()
at __awaiter (/builds/test/node_modules/@pulumi/pulumi/output.js:17:12)
at applyHelperAsync (/builds/test/node_modules/@pulumi/pulumi/output.js:229:12)
at /builds/test/node_modules/@pulumi/output.ts:302:65
at runMicrotasks ()
at processTicksAndRejections (node:internal/process/task_queues:96:5)

re-thc avatar Apr 07 '22 02:04 re-thc

Hi folks - we apologize for the churn on EKS. We believe that https://github.com/pulumi/pulumi-eks/pull/675 will fix the issue going forward.

guineveresaenger avatar Apr 08 '22 18:04 guineveresaenger

Are there any workarounds or known working versions? I have since tried reverting to previous packages, but it no longer works and is always stuck on the same error.

re-thc avatar Apr 11 '22 08:04 re-thc

I get this error when setting Cluster(version='1.24') but not 1.22 or 1.23. I'm using:

$ pulumi about
CLI
Version      3.46.1
Go Version   go1.19.2
Go Compiler  gc

Plugins
NAME        VERSION
aws         5.7.2
eks         0.42.7
kubernetes  3.23.1
python      3.10.8

Host
OS       ubuntu
Version  20.04
Arch     x86_64

(I'm stuck on an older version of Pulumi because of proto 3 v 4 issues, not all of my dependencies are proto4 compatible yet.)

markfickett avatar Jan 13 '23 14:01 markfickett

I ran into this again working on another new cluster w/ up-to-date versions of everything, here's the pyproject.toml:

dependencies = [
    "pulumi==3.55.0",
    "pulumi-aws==5.30.0",
    # node v19.4.0 and awscli 2.10.3 installed separately for pulumi-eks
    "pulumi-eks==1.0.1",
    "pulumi-honeycomb==0.0.14",
]

I found that I had to make this change to get around the error:

     cluster = eks.Cluster( ... )
     eks.ManagedNodeGroup(
         f"{EKS_CLUSTER_NAME}-managed-node-group",
         node_group_name=f"{EKS_CLUSTER_NAME}-managed-node-group",
-        cluster=cluster.core,
+        cluster=cluster,
         version=_K8S_VERSION,
         subnet_ids=_CLUSTER_SUBNETS,
         node_role=node_role,
         ...

markfickett avatar Mar 01 '23 19:03 markfickett

Hi, I also have the issue. I don't create the cluster in the simplest way: it's with k8s 1.23, managed node groups, an OIDC provider, and instance roles.

What happens is this (here it is during a preview with an existing cluster, but I think it can also happen at creation time and/or during an update):

[screenshot of the stack trace during preview]

clusterCertificateAuthority is undefined.

The issue happens randomly.

Any idea what could be the cause?

unludo avatar Mar 13 '23 17:03 unludo

Has anyone found a solution yet?

bsod90 avatar May 04 '23 17:05 bsod90

Any ideas on a work around? I have the same issue with updating an existing cluster.

CLI          
Version      3.74.0
Go Version   go1.20.5
Go Compiler  gc

Plugins
NAME        VERSION
aws         5.41.0
aws         5.31.0
awsx        1.0.2
docker      3.6.1
eks         1.0.2
kubernetes  3.30.1
nodejs      unknown

Host     
OS       debian
Version  bookworm/sid
Arch     x86_64

This project is written in nodejs: executable='/bin/node' version='v18.16.1'

Current Stack: superb/devops/shared

Found no pending operations associated with stack

Backend        
Name           pulumi.com
URL            https://app.pulumi.com/xx
User           xx
Organizations  xx

Dependencies:
NAME                VERSION
@pulumi/pulumi      3.74.0
@types/node         14.18.53
@pulumi/aws         5.41.0
@pulumi/awsx        1.0.2
@pulumi/eks         1.0.2
@pulumi/kubernetes  3.30.1

TapTap21 avatar Jul 11 '23 08:07 TapTap21

@guineveresaenger Is there any movement on this issue? I tried getting help in the community slack to no avail. This completely bricks a stack as there is no known workaround.

TapTap21 avatar Aug 18 '23 10:08 TapTap21

Some extra context given the changes made by @danielrbradley (thanks for the help!)

I failed to reproduce this locally, and given what I think happened, I don't believe it's possible at the moment: I would need to create a cluster on an older k8s version without specifying the version. When I created the cluster, I did NOT specify a version, and it remains unset. I have since updated the version in the EKS console a few times and pulled the changes in using pulumi refresh. After that, the stack broke.

In the broken stack I'm also not sure why the certificate authority is undefined, as I can clearly see it present when I export the Pulumi state, and it obviously exists on the EKS cluster itself.

Here's the relevant part of the state:

{
  "urn": "urn:pulumi:shared::devops::eks:index:Cluster::devopsEksCluster",
  "custom": false,
  "type": "eks:index:Cluster",
  "outputs": {
    "eksCluster": {
      "4dabf18193072939515e22adb298388d": "xxx",
      "id": "superb-eks-devops",
      "packageVersion": "",
      "urn": "urn:pulumi:shared::devops::eks:index:Cluster$aws:eks/cluster:Cluster::devopsEksCluster-eksCluster"
    },
    "kubeconfig": {
      "apiVersion": "v1",
      "clusters": [
        {
          "cluster": {
            "certificate-authority-data": "xxx",
            "server": "https://xxx.gr7.eu-central-1.eks.amazonaws.com/"
          },
          "name": "kubernetes"
        }
      ],
      "contexts": [
        {
          "context": {
            "cluster": "kubernetes",
            "user": "aws"
          },
          "name": "aws"
        }
      ],
      "current-context": "aws",
      "kind": "Config",
      "users": [
        {
          "name": "aws",
          "user": {
            "exec": {
              "apiVersion": "client.authentication.k8s.io/v1beta1",
              "args": [
                "eks",
                "get-token",
                "--cluster-name",
                "superb-eks-devops",
                "--role",
                "arn:aws:iam::xx:role/xx"
              ],
              "command": "aws",
              "env": [
                {
                  "name": "KUBERNETES_EXEC_INFO",
                  "value": "{\"apiVersion\": \"client.authentication.k8s.io/v1beta1\"}"
                }
              ]
            }
          }
        }
      ]
    }
  },
  "parent": "urn:pulumi:shared::devops::pulumi:pulumi:Stack::devops-shared"
}

Is there any info I can add to help?

TapTap21 avatar Aug 22 '23 11:08 TapTap21

Thanks for the extra context @TapTap21

The root of the issue seems to be in the AWS provider - it would appear that the certificateAuthority property is sometimes coming through as undefined even though it should always have a value.

There's a suspiciously relevant issue that was closed a while back:

  • https://github.com/pulumi/pulumi-aws/issues/1892

This was addressed by adding the following patch on the upstream implementation:

  • https://github.com/pulumi/pulumi-aws/blob/9b14ba69d12fd30ad570510b420ee4edfeefe7bb/upstream-patches/0009-Add-EKS-cluster-certificate_authorities-plural.patch#L18C2-L49C92

Though this was reported to have been resolved as of AWS version 5.1.2, there may be some combination of actions which still triggers it.

Two things which could help resolve the root cause here are:

  1. A reliable reproduction which demonstrates the issue every time.
  2. An example of the state of the Cluster from the AWS provider (rather than just the EKS Cluster component) to observe when the certificateAuthority property becomes unset.

danielrbradley avatar Aug 22 '23 13:08 danielrbradley

Also, to note: PR #903 will likely resolve the immediate error being thrown; however, it will likely result in a kubeconfig which is incomplete, due to the required clusters.cluster.certificate-authority-data property not being set.

Therefore I'm leaving this issue open to continue to investigate the root cause of the missing property.
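To illustrate what "incomplete" means here, a sketch (not the component's actual kubeconfig builder): skipping the CA when it is undefined produces a kubeconfig entry that parses fine but cannot verify the API server's TLS certificate later.

```typescript
// Sketch only, not the pulumi-eks source. Shows why guarding the undefined
// CA avoids the TypeError but leaves the kubeconfig incomplete.
function clusterEntry(server: string, caData?: string) {
    const cluster: Record<string, string> = { server };
    if (caData !== undefined) {
        // In a healthy kubeconfig this sits under clusters[].cluster.
        cluster["certificate-authority-data"] = caData;
    }
    return { name: "kubernetes", cluster };
}

const ok = clusterEntry("https://xxx.eks.amazonaws.com", "abc==");
const incomplete = clusterEntry("https://xxx.eks.amazonaws.com");

console.log("certificate-authority-data" in ok.cluster);         // true
console.log("certificate-authority-data" in incomplete.cluster); // false
```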

danielrbradley avatar Aug 22 '23 13:08 danielrbradley

Thanks @danielrbradley, I'll keep trying to reproduce this issue, but it is fairly costly as you explained; creating and destroying EKS clusters takes quite some time.

Regarding the second point, any idea what strings I can search for in the state file to find more relevant info? I searched for certificate and every entry seemed normal. Is there a specific type I should be looking for?

TapTap21 avatar Aug 22 '23 14:08 TapTap21

@TapTap21 the type you're looking for is aws:eks/cluster:Cluster. It should have its parent set to the component you shared above (urn:pulumi:shared::devops::eks:index:Cluster::devopsEksCluster).

Within the AWS provider we're just calling the AWS SDK DescribeClusterWithContext method, which should be equivalent to calling aws eks describe-cluster via the AWS CLI. So the state should have field values equivalent to that command's output.

Side Note

One other variable to consider here is the AWS versions involved. When reproducing this issue in NodeJS, we do pin the AWS dependency (currently to 5.31.0), but it can be overridden in your own package.json. In other languages we bundle the whole AWS dependency within the pulumi-eks provider binary, so reproduction should be more consistent there.
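For NodeJS users wanting to rule out version skew, the pin can be made explicit in package.json. A hypothetical fragment, using versions mentioned earlier in this thread (adjust to your setup):

```json
{
  "dependencies": {
    "@pulumi/pulumi": "3.74.0",
    "@pulumi/eks": "1.0.2",
    "@pulumi/aws": "5.31.0"
  }
}
```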

danielrbradley avatar Aug 22 '23 14:08 danielrbradley

I've just released v1.0.3 which should now avoid the error being thrown, but will likely have an incomplete kubeconfig due to the missing certificate-authority-data.

Our next priority here is to determine the specific set of conditions under which the certificate authority property becomes undefined.

danielrbradley avatar Aug 23 '23 09:08 danielrbradley

@danielrbradley Here is the aws:eks/cluster:Cluster with some obfuscation

{
  "urn": "urn:pulumi:shared::devops::eks:index:Cluster$aws:eks/cluster:Cluster::devopsEksCluster-eksCluster",
  "custom": true,
  "id": "superb-eks-devops",
  "type": "aws:eks/cluster:Cluster",
  "inputs": {
    "__defaults": [],
    "encryptionConfig": {
      "__defaults": [],
      "provider": {
        "__defaults": [],
        "keyArn": "arn:aws:kms:eu-central-1:xxx:key/6343021b-5262-46bc-9b58-dd471aa66b36"
      },
      "resources": [
        "secrets"
      ]
    },
    "kubernetesNetworkConfig": {
      "__defaults": [],
      "serviceIpv4Cidr": "172.20.0.0/16"
    },
    "name": "superb-eks-devops",
    "roleArn": "arn:aws:iam::xxx:role/devopsEksCluster-eksRole-role-fc3bd30",
    "tags": {
      "Name": "devopsEksCluster-eksCluster",
      "__defaults": []
    },
    "vpcConfig": {
      "__defaults": [],
      "endpointPrivateAccess": true,
      "endpointPublicAccess": true,
      "securityGroupIds": [
        "sg-07ba67a54cc36b042"
      ],
      "subnetIds": [
        "subnet-01db0f47b326d5009",
        "subnet-0553e42f7786b7685",
        "subnet-081fa9c2db9007b97",
        "subnet-056870ef50edd9319"
      ]
    }
  },
  "outputs": {
    "__meta": "{\"e2bfb730-ecaa-11e6-8f88-34363bc7c4c0\":{\"create\":1800000000000,\"delete\":900000000000,\"update\":3600000000000}}",
    "arn": "arn:aws:eks:eu-central-1:xxx:cluster/superb-eks-devops",
    "certificateAuthorities": [
      {
        "data": "xxx==" // note this is the same value as certificateAuthority.data
      }
    ],
    "certificateAuthority": {
      "data": "xxx==" // note this is the same value as certificateAuthorities.data
    },
    "createdAt": "2022-07-13 11:35:13.984 +0000 UTC",
    "defaultAddonsToRemoves": [],
    "enabledClusterLogTypes": [],
    "encryptionConfig": {
      "provider": {
        "keyArn": "arn:aws:kms:eu-central-1:xxx:key/6343021b-5262-46bc-9b58-dd471aa66b36"
      },
      "resources": [
        "secrets"
      ]
    },
    "endpoint": "https://xxx.gr7.eu-central-1.eks.amazonaws.com",
    "id": "superb-eks-devops",
    "identities": [
      {
        "oidcs": [
          {
            "issuer": "https://oidc.eks.eu-central-1.amazonaws.com/id/xxx"
          }
        ]
      }
    ],
    "kubernetesNetworkConfig": {
      "ipFamily": "ipv4",
      "serviceIpv4Cidr": "172.20.0.0/16",
      "serviceIpv6Cidr": ""
    },
    "name": "superb-eks-devops",
    "outpostConfig": null,
    "platformVersion": "eks.4",
    "roleArn": "arn:aws:iam::xxx:role/devopsEksCluster-eksRole-role-fc3bd30",
    "status": "ACTIVE",
    "tags": {
      "Name": "devopsEksCluster-eksCluster"
    },
    "tagsAll": {
      "Name": "devopsEksCluster-eksCluster"
    },
    "version": "1.27",
    "vpcConfig": {
      "clusterSecurityGroupId": "sg-0cc0afde5f486904d",
      "endpointPrivateAccess": true,
      "endpointPublicAccess": true,
      "publicAccessCidrs": [
        "0.0.0.0/0"
      ],
      "securityGroupIds": [
        "sg-07ba67a54cc36b042"
      ],
      "subnetIds": [
        "subnet-01db0f47b326d5009",
        "subnet-0553e42f7786b7685",
        "subnet-081fa9c2db9007b97",
        "subnet-056870ef50edd9319"
      ],
      "vpcId": "vpc-073ed2c466925d214"
    }
  },
  "parent": "urn:pulumi:shared::devops::eks:index:Cluster::devopsEksCluster",
  "dependencies": [
    "urn:pulumi:shared::devops::aws:ec2/subnet:Subnet::subnet-private-1a",
    "urn:pulumi:shared::devops::aws:ec2/subnet:Subnet::subnet-private-1b",
    "urn:pulumi:shared::devops::aws:ec2/subnet:Subnet::subnet-public-1a",
    "urn:pulumi:shared::devops::aws:ec2/subnet:Subnet::subnet-public-1b",
    "urn:pulumi:shared::devops::aws:kms/key:Key::devopsEksKms",
    "urn:pulumi:shared::devops::eks:index:Cluster$aws:ec2/securityGroup:SecurityGroup::devopsEksCluster-eksClusterSecurityGroup",
    "urn:pulumi:shared::devops::eks:index:Cluster$eks:index:ServiceRole$aws:iam/role:Role::devopsEksCluster-eksRole-role",
    "urn:pulumi:shared::devops::eks:index:Cluster$eks:index:ServiceRole$aws:iam/rolePolicyAttachment:RolePolicyAttachment::devopsEksCluster-eksRole-4b490823"
  ],
  "provider": "urn:pulumi:shared::devops::pulumi:providers:aws::devopsAccount::3a5b59d3-410a-4c71-af40-4cf1b2c10a8e",
  "propertyDependencies": {
    "encryptionConfig": [
      "urn:pulumi:shared::devops::aws:kms/key:Key::devopsEksKms"
    ],
    "kubernetesNetworkConfig": null,
    "name": null,
    "roleArn": [
      "urn:pulumi:shared::devops::eks:index:Cluster$eks:index:ServiceRole$aws:iam/role:Role::devopsEksCluster-eksRole-role",
      "urn:pulumi:shared::devops::eks:index:Cluster$eks:index:ServiceRole$aws:iam/rolePolicyAttachment:RolePolicyAttachment::devopsEksCluster-eksRole-4b490823"
    ],
    "tags": null,
    "vpcConfig": [
      "urn:pulumi:shared::devops::eks:index:Cluster$aws:ec2/securityGroup:SecurityGroup::devopsEksCluster-eksClusterSecurityGroup",
      "urn:pulumi:shared::devops::aws:ec2/subnet:Subnet::subnet-private-1a",
      "urn:pulumi:shared::devops::aws:ec2/subnet:Subnet::subnet-private-1b",
      "urn:pulumi:shared::devops::aws:ec2/subnet:Subnet::subnet-public-1a",
      "urn:pulumi:shared::devops::aws:ec2/subnet:Subnet::subnet-public-1b"
    ]
  },
  "modified": "2023-08-23T15:13:47.919417052Z"
},

And here is the output of the cli command

{
  "cluster": {
    "name": "superb-eks-devops",
    "arn": "arn:aws:eks:eu-central-1:xxx:cluster/superb-eks-devops",
    "createdAt": "2022-07-13T13:35:13.984000+02:00",
    "version": "1.27",
    "endpoint": "https://xxx.gr7.eu-central-1.eks.amazonaws.com",
    "roleArn": "arn:aws:iam::xxx:role/devopsEksCluster-eksRole-role-fc3bd30",
    "resourcesVpcConfig": {
      "subnetIds": [
        "subnet-01db0f47b326d5009",
        "subnet-0553e42f7786b7685",
        "subnet-081fa9c2db9007b97",
        "subnet-056870ef50edd9319"
      ],
      "securityGroupIds": [
        "sg-07ba67a54cc36b042"
      ],
      "clusterSecurityGroupId": "sg-0cc0afde5f486904d",
      "vpcId": "vpc-073ed2c466925d214",
      "endpointPublicAccess": true,
      "endpointPrivateAccess": true,
      "publicAccessCidrs": [
        "0.0.0.0/0"
      ]
    },
    "kubernetesNetworkConfig": {
      "serviceIpv4Cidr": "172.20.0.0/16",
      "ipFamily": "ipv4"
    },
    "logging": {
      "clusterLogging": [
        {
          "types": [
            "api",
            "audit",
            "authenticator",
            "controllerManager",
            "scheduler"
          ],
          "enabled": false
        }
      ]
    },
    "identity": {
      "oidc": {
        "issuer": "https://oidc.eks.eu-central-1.amazonaws.com/id/xxx"
      }
    },
    "status": "ACTIVE",
    "certificateAuthority": {
      "data": "xxx==" // Note, values is consistent with the value in the pulumi state file
    },
    "platformVersion": "eks.4",
    "tags": {
      "Name": "devopsEksCluster-eksCluster"
    },
    "encryptionConfig": [
      {
        "resources": [
          "secrets"
        ],
        "provider": {
          "keyArn": "arn:aws:kms:eu-central-1:xxx:key/6343021b-5262-46bc-9b58-dd471aa66b36"
        }
      }
    ]
  }
}

The new version

I've just released v1.0.3 which should now avoid the error being thrown, but will likely have an incomplete kubeconfig due to the missing certificate-authority-data.

Our next priority here is to determine the specific set of conditions under which the certificate authority property becomes undefined.

I've updated to v1.0.3 and tried to run pulumi up, which produced this error:


     Type                                Name                         Plan        Info
     pulumi:pulumi:Stack                 devops-shared                            1 error
 ~   ├─ pulumi:providers:aws             devopsAccount                update      [diff: ~version]
 ~   ├─ pulumi:providers:aws             devAccount                   update      [diff: ~version]
     ├─ eks:index:Cluster                devopsEksCluster                         
 ~   │  ├─ pulumi:providers:kubernetes   devopsEksCluster-provider    update      [diff: ~version]
 ~   │  ├─ pulumi:providers:kubernetes   devopsEksCluster-eks-k8s     update      [diff: ~version]
 +-  │  └─ kubernetes:core/v1:ConfigMap  devopsEksCluster-nodeAccess  replace     [diff: ~data]
 ~   └─ pulumi:providers:kubernetes      SSAdevopsEksProvider         update      [diff: ~version]


Diagnostics:
  pulumi:pulumi:Stack (devops-shared):
    error: Running program '/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/' failed with an unhandled exception:
    <ref *1> Error: failed to register new resource devopsEksProvider [pulumi:providers:kubernetes]: 3 INVALID_ARGUMENT: invalid alias URN: invalid URN "urn:pulumi:shared::devops::eks:index:Cluster$pulumi:providers:kubernetes::devopsEksCluster-provider::bc51e086-2865-45ab-bc15-8eb139846b14"
        at Object.registerResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/runtime/resource.ts:421:27)
        at new Resource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/resource.ts:490:13)
        at new CustomResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/resource.ts:880:9)
        at new ProviderResource (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/resource.ts:923:9)
        at new Provider (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/@pulumi/provider.ts:56:9)
        at Object.<anonymous> (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/k8s/provider/index.ts:5:34)
        at Module._compile (node:internal/modules/cjs/loader:1256:14)
        at Module.m._compile (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/ts-node/src/index.ts:439:23)
        at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
        at Object.require.extensions.<computed> [as .ts] (/home/wernich/WebstormProjects/infrastructure/infrastructure/devops/shared/node_modules/ts-node/src/index.ts:442:12) {
      promise: Promise { <rejected> [Circular *1] }

Digging a bit deeper into this, it doesn't make much sense. I can see that my k8s resources reference the provider urn:pulumi:shared::devops::eks:index:Cluster$pulumi:providers:kubernetes::devopsEksCluster-provider::bc51e086-2865-45ab-bc15-8eb139846b14, but when I look at the URN of the provider that EKS creates, it doesn't match (note: the CA details were correct in the obfuscated kubeconfig):

{
  "urn": "urn:pulumi:shared::devops::eks:index:Cluster$pulumi:providers:kubernetes::devopsEksCluster-eks-k8s",
  "custom": true,
  "id": "998ffbc6-abf9-4d2e-99ae-fe69c83e5110",
  "type": "pulumi:providers:kubernetes",
  "inputs": {
    "kubeconfig": "xxx",
    "version": "3.24.1"
  },
  "outputs": {
    "kubeconfig": "xxx",
    "version": "3.24.1"
  },
  "parent": "urn:pulumi:shared::devops::eks:index:Cluster::devopsEksCluster",
  "dependencies": [
    "urn:pulumi:shared::devops::eks:index:Cluster$aws:eks/cluster:Cluster::devopsEksCluster-eksCluster"
  ],
  "propertyDependencies": {
    "kubeconfig": [
      "urn:pulumi:shared::devops::eks:index:Cluster$aws:eks/cluster:Cluster::devopsEksCluster-eksCluster"
    ]
  }
}

Perhaps something strange happened with aliases that led to the provider being replaced, but the resources that use it didn't update the URNs of the provider they rely on?

TapTap21 avatar Aug 23 '23 15:08 TapTap21

I tried to add as much info as I can to help solve this

TapTap21 avatar Aug 23 '23 15:08 TapTap21

Ok, so it looks like the property is correctly populated in the state, but comes through as undefined when read by the EKS component.

@justinvp have you seen any similar issues with component providers?

danielrbradley avatar Aug 25 '23 15:08 danielrbradley

@danielrbradley Good day, is there any movement on this particular issue?

TapTap21 avatar Sep 18 '23 07:09 TapTap21

@TapTap21 I have been looking into this today. If I look back at your comment from Aug 23, section The new version, I noticed that the refresh error gives an invalid URN for the provider named devopsEksCluster-provider, but you then post the state output for provider devopsEksCluster-eks-k8s.

You might have overlooked this, but we create 2 Kubernetes provider objects within the EKS component, of which one we export for downstream usage. Here is the snippet from your output:

 ~   │  ├─ pulumi:providers:kubernetes   devopsEksCluster-provider    update      [diff: ~version]
 ~   │  ├─ pulumi:providers:kubernetes   devopsEksCluster-eks-k8s     update      [diff: ~version]

Then again, I still haven't been able to reproduce the problem so far.

ringods avatar Oct 30 '23 15:10 ringods