
FieldExport is not populating the ConfigMap

Open nabeelaccount opened this issue 1 year ago • 5 comments

Describe the bug
FieldExport is not populating the ConfigMap. I have checked the paths and they look correct. I have also noticed this message in the ACK controller logs: "msg":"desired resource state has changed"

which is backed by this condition on the DBInstance:

conditions:
  - lastTransitionTime: "2024-08-13T22:12:23Z"
    status: "False"
    type: ACK.ResourceSynced
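
For reference, FieldExport resources carry their own status conditions when an export cannot be applied, so inspecting them directly may narrow this down. A rough check (resource names match the manifests below; assumes the default namespace):

kubectl -n default get fieldexports.services.k8s.aws
kubectl -n default describe fieldexport postgres-db-host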

Steps to reproduce

---
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: "postgres-db"
spec:
  allocatedStorage: 20
  dbInstanceClass: db.t4g.micro
  dbInstanceIdentifier: "postgres-db"
  engine: postgres
  engineVersion: "14"
  masterUsername: "postgres"
  masterUserPassword:
    namespace: default
    name: "postgres-password"
    key: password

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-db-conn-cm
data: {}
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: postgres-db-host
spec:
  to:
    name: postgres-db-conn-cm
    kind: configmap
  from:
    path: ".status.endpoint.address"
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: postgres-db
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: postgres-db-port
spec:
  to:
    name: postgres-db-conn-cm
    kind: configmap
  from:
    path: ".status.endpoint.port"
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: postgres-db
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: postgres-db-user
spec:
  to:
    name: postgres-db-conn-cm
    kind: configmap
  from:
    path: ".spec.masterUsername"
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: postgres-db
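
To confirm whether the source paths resolve at all and whether anything lands in the target, something along these lines should be enough (a sketch; assumes the manifests above are applied in the default namespace):

kubectl -n default get dbinstance postgres-db -o jsonpath='{.status.endpoint.address}'
kubectl -n default get dbinstance postgres-db -o jsonpath='{.status.endpoint.port}'
kubectl -n default get configmap postgres-db-conn-cm -o yaml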

Expected outcome
I expect the FieldExport to populate the ConfigMap with the correct values from the specified paths.

Environment
prod

Logs {"level":"info","ts":"2024-08-13T22:12:54.593Z","logger":"ackrt","msg":"updated resource","kind":"DBInstance","namespace":"default","name":"postgres-db","account":"123456789","role":"","region":"eu-west-2","is_adopted":false,"generation":3} {"level":"info","ts":"2024-08-13T22:13:24.840Z","logger":"ackrt","msg":"desired resource state has changed","kind":"DBInstance","namespace":"default","name":"postgres-db","account":"123456789","role":"","region":"eu-west-2","is_adopted":false,"generation":3,"diff":[{"Path":{"Parts":["Spec","CACertificateIdentifier"]},"A":null,"B":"rds-ca-rsa2048-g1"},{"Path":{"Parts":["Spec","StorageThroughput"]},"A":null,"B":0}]} {"level":"info","ts":"2024-08-13T22:13:25.729Z","logger":"ackrt","msg":"updated resource","kind":"DBInstance","namespace":"default","name":"postgres-db","account":"123456789","role":"","region":"eu-west-2","is_adopted":false,"generation":3} {"level":"info","ts":"2024-08-13T22:13:55.984Z","logger":"ackrt","msg":"desired resource state has changed","kind":"DBInstance","namespace":"default","name":"postgres-db","account":"123456789","role":"","region":"eu-west-2","is_adopted":false,"generation":3,"diff":[{"Path":{"Parts":["Spec","CACertificateIdentifier"]},"A":null,"B":"rds-ca-rsa2048-g1"},{"Path":{"Parts":["Spec","StorageThroughput"]},"A":null,"B":0}]} {"level":"info","ts":"2024-08-13T22:13:57.023Z","logger":"ackrt","msg":"updated resource","kind":"DBInstance","namespace":"default","name":"postgres-db","account":"123456789","role":"","region":"eu-west-2","is_adopted":false,"generation":3} {"level":"info","ts":"2024-08-13T22:14:27.289Z","logger":"ackrt","msg":"desired resource state has changed","kind":"DBInstance","namespace":"default","name":"postgres-db","account":"123456789","role":"","region":"eu-west-2","is_adopted":false,"generation":3,"diff":[{"Path":{"Parts":["Spec","CACertificateIdentifier"]},"A":null,"B":"rds-ca-rsa2048-g1"},{"Path":{"Parts":["Spec","StorageThroughput"]},"A":null,"B":0}]} {"level":"info","ts":"2024-08-13T22:14:28.244Z","logger":"ackrt","msg":"updated resource","kind":"DBInstance","namespace":"default","name":"postgres-db","account":"123456789","role":"","region":"eu-west-2","is_adopted":false,"generation":3}

  • Kubernetes version:
    Client Version: v1.30.3 Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3 Server Version: v1.30.3-eks-2f46c53
  • Using EKS (yes/no), if so version?: Yes. 1.30
  • AWS service targeted (S3, RDS, etc.): RDS

nabeelaccount avatar Aug 13 '24 22:08 nabeelaccount

This is how the ACK controller was installed:

resource "helm_release" "ack_rds" {
  name             = "ack-rds"
  namespace        = "kube-system"
  repository       = "oci://public.ecr.aws/aws-controllers-k8s"
  chart            = "rds-chart"
  version          = "1.4.3"
  create_namespace = false
  
  set {
    name  = "aws.region"
    value = var.region
  }

  set {
    name  = "serviceAccount.create"
    value = "false"
  }

  set {
    name  = "serviceAccount.name"
    value = kubernetes_service_account.ack_rds.metadata[0].name
  }
}
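
For what it's worth, recent controller charts are expected to ship the common ACK CRDs (including FieldExport) alongside the controller, so it may be worth confirming the CRD actually exists in the cluster. A minimal check (nothing specific to this chart version assumed):

kubectl get crd fieldexports.services.k8s.aws
kubectl api-resources --api-group=services.k8s.aws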

nabeelaccount avatar Aug 14 '24 07:08 nabeelaccount

I think I'm seeing a similar error. While this worked perfectly in my prod cluster, I'm now applying identical manifests on my dev cluster and hitting the same issue. I've tried deleting and recreating the FieldExports and the ConfigMap, but no improvement:

DBInstance manifests
---
apiVersion: kms.services.k8s.aws/v1alpha1
kind: Key
metadata:
  name: odoo-v13-rds
spec:
  description: Encryption key for odoo RDS instance
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: SecurityGroup
metadata:
  name: odoo-v13-db-sg
spec:
  description: Allow access to odoo RDS instance
  ingressRules:
    - fromPort: 5432
      ipProtocol: tcp
      ipRanges:
        - cidrIP: 10.0.0.0/8
          description: Internal traffic
      toPort: 5432
  name: oo-v13-db-$(ENVIRONMENT)-sg
  vpcID: $(VPC_ID)
---
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: odoo-v13-rds
spec:
  allocatedStorage: 100
  dbInstanceClass: db.r6g.8xlarge
  dbInstanceIdentifier: $(ENVIRONMENT)-ims
  engine: postgres
  engineVersion: '16'
  dbSnapshotIdentifier: arn:aws:rds:us-west-2:208266463175:snapshot:production-ims-2024-12-12-09-11
  # dbName: imsv2
  masterUsername: odooclient
  masterUserPassword:
    name: odoo-v13-eaze-service-db-password
    key: value
  autoMinorVersionUpgrade: true
  backupRetentionPeriod: 7
  copyTagsToSnapshot: true
  dbSubnetGroupName: eaze-$(ENVIRONMENT)
  deletionProtection: false
  enableCloudwatchLogsExports: [postgresql, upgrade]
  enableIAMDatabaseAuthentication: true
  kmsKeyRef:
    from:
      name: odoo-v13-rds
  multiAZ: true
  performanceInsightsEnabled: true
  performanceInsightsRetentionPeriod: 7
  publiclyAccessible: false
  storageEncrypted: true
  storageType: gp3
  vpcSecurityGroupRefs:
    - from:
        name: odoo-v13-db-sg

ConfigMap/FieldExport manifests
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: odoo-v13-rds-values
data: {}
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: odoo-v13-rds-host
spec:
  to:
    name: odoo-v13-rds-values
    kind: configmap
  from:
    path: .status.endpoint.address
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: odoo-v13-rds
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: odoo-v13-rds-port
spec:
  to:
    name: odoo-v13-rds-values
    kind: configmap
  from:
    path: .status.endpoint.port
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: odoo-v13-rds
---
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: odoo-v13-rds-user
spec:
  to:
    name: odoo-v13-rds-values
    kind: configmap
  from:
    path: .spec.masterUsername
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: odoo-v13-rds
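
In case it helps anyone comparing notes, the checks I'd run against the above look roughly like this (run in whatever namespace the manifests are applied to):

kubectl get fieldexport odoo-v13-rds-host odoo-v13-rds-port odoo-v13-rds-user -o yaml
kubectl get configmap odoo-v13-rds-values -o jsonpath='{.data}'
kubectl get dbinstance odoo-v13-rds -o jsonpath='{.status.endpoint}'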

philchristensen avatar Feb 06 '25 15:02 philchristensen

Issues go stale after 180d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 60d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

ack-bot avatar Aug 05 '25 15:08 ack-bot

/remove-lifecycle stale

FernandoMiguel avatar Aug 08 '25 09:08 FernandoMiguel

Are there any plans to address this bug? We are stuck on version 1.4.22 of the controller until this is fixed.

johnjeffers avatar Nov 04 '25 20:11 johnjeffers

Does anyone have a workaround for this, or is there any progress? It's kind of a dealbreaker for using ACK.

alon-apono avatar Dec 18 '25 19:12 alon-apono

Hi @nabeelaccount, we had a similar issue in our EKS cluster, which was resolved by simply restarting the ACK controller.

Did you deploy any other ACK controllers besides RDS? The error logs actually appeared in another, unrelated controller (I think it was KMS):


{"level":"info","ts":"2025-11-14T12:28:54.377Z","msg":"pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: failed to list *v1alpha1.FieldExport: the server could not find the requested resource (get fieldexports.services.k8s.aws)"}
{"level":"error","ts":"2025-11-14T12:28:54.377Z","msg":"Unhandled Error","logger":"UnhandledError","error":"pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: Failed to watch *v1alpha1.FieldExport: failed to list *v1alpha1.FieldExport: the server could not find the requested resource (get fieldexports.services.k8s.aws)","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:166\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:316\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:314\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:72"}

A simple redeployment of the ACK controllers solved it for us.
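
For completeness, the restart was nothing more exotic than a rollout restart of the controller deployment. The deployment name below is only an example and depends on your Helm release name:

# Deployment name is illustrative; check yours with: kubectl -n kube-system get deploy | grep rds
kubectl -n kube-system rollout restart deployment/ack-rds-rds-chart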

mateocolina avatar Dec 18 '25 19:12 mateocolina