ElastiCache controller error: invalid: status.ackResourceMetadata.region: Required value
Describe the bug
We have upgraded the ElastiCache controller to version v0.0.23, and the existing resources are now getting the error below:
2023-05-17T07:11:49.837Z ERROR Reconciler error {"controller": "replicationgroup", "controllerGroup": "elasticache.services.k8s.aws", "controllerKind": "ReplicationGroup", "ReplicationGroup": {"name":"ndi","namespace":"preview"}, "namespace": "preview", "name": "ndi", "reconcileID": "a43187e4-4af4-4730-955f-cebb4f97f987", "error": "ReplicationGroup.elasticache.services.k8s.aws \"ndi\" is invalid: status.ackResourceMetadata.region: Required value"}
If we manually update the existing resource via kubectl edit --subresource=status, we get:
* spec.description: Required value
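For reference, a sketch of the manual edit described above, with the resource name and namespace taken from the error log (the --subresource flag needs kubectl v1.24 or newer):

```sh
# Edit the status subresource of the existing ReplicationGroup directly.
# Name and namespace are taken from the error log above (ndi / preview).
kubectl edit replicationgroups.elasticache.services.k8s.aws ndi \
  --namespace preview --subresource=status
```

Saving that edit is when the spec.description: Required value error appears.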
Steps to reproduce
Upgrade the controller and try the version change
Expected outcome
It should be able to update the existing resources.
Environment
- Kubernetes version 1.24
- Using EKS (yes/no), if so version? Yes
- AWS service targeted (S3, RDS, etc.) Redis
Weirdly, I don't see a required annotation on the region field in the Go code: https://github.com/aws-controllers-k8s/runtime/blob/main/apis/core/v1alpha1/resource_metadata.go#L33-L34
But I do see it set to required in the CRD: https://github.com/aws-controllers-k8s/elasticache-controller/blob/main/config/crd/bases/elasticache.services.k8s.aws_replicationgroups.yaml#L431-L433
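To double-check what the API server is actually enforcing on a given cluster, something like the following should print the required sub-fields of status.ackResourceMetadata straight from the installed CRD (a sketch; the versions[0] index assumes the CRD serves a single version):

```sh
# Print which sub-fields of status.ackResourceMetadata the installed CRD marks
# as required; versions[0] assumes a single-version CRD.
kubectl get crd replicationgroups.elasticache.services.k8s.aws \
  -o jsonpath='{.spec.versions[0].schema.openAPIV3Schema.properties.status.properties.ackResourceMetadata.required}'
```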
It's possible that the version of the ElastiCache controller you were using previously was particularly old, or faulty, and didn't add the required fields to the status originally. If you create a new ReplicationGroup resource with the newest version of the controller, do you see these fields populated?
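A quick way to check that on a freshly created resource (a sketch; substitute your own resource name and namespace):

```sh
# Show the ACK resource metadata in the status; an empty result means the
# controller never populated region (or the rest of the metadata block).
kubectl get replicationgroups.elasticache.services.k8s.aws <name> \
  --namespace <namespace> -o jsonpath='{.status.ackResourceMetadata}'
```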
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 60d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 60d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten