ACK controllers have duplicate CRDs, causing issues when using kustomize
Is your feature request related to a problem? Yes. For context: we are using Argo CD and kustomize, and we have defined an Argo CD application that groups all the ACK controllers we want to use (IAM, RDS, and S3). When kustomize builds the YAML, it fails with an error caused by duplicated CRDs.
The CRDs concerned are the following:
- services.k8s.aws_adoptedresources.yaml
- services.k8s.aws_fieldexports.yaml
These CRDs are present in each Helm chart, and they are exactly identical.
Steps to reproduce our problem
Create your kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- repo: oci://public.ecr.aws/aws-controllers-k8s
name: s3-chart
includeCRDs: true
releaseName: s3
version: 1.0.9
- repo: oci://public.ecr.aws/aws-controllers-k8s
name: rds-chart
includeCRDs: true
releaseName: rds
version: 1.1.11
Then apply:
kustomize build --enable-helm .
At this step, kustomize fails, saying that some CRDs are identical and need to be merged or patched.
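Since kustomize aborts during the build itself, the duplicates have to be removed before the merge. As a stopgap, one could render each chart separately (e.g. with helm template), concatenate the output, and drop repeated CRDs before applying. A minimal sketch, assuming a regex-based name lookup is good enough for rendered manifests (the dedupe_crds helper is my own illustration, not part of ACK or kustomize):

```python
import re
import sys

CRD_KIND = "CustomResourceDefinition"

def dedupe_crds(rendered: str) -> str:
    """Keep only the first occurrence of each CRD in a multi-document
    YAML stream; pass every non-CRD document through untouched."""
    seen = set()
    kept = []
    for doc in rendered.split("\n---\n"):
        kind = re.search(r"^kind:\s*(\S+)", doc, re.MULTILINE)
        name = re.search(r"^\s+name:\s*(\S+)", doc, re.MULTILINE)
        if kind and kind.group(1) == CRD_KIND and name:
            if name.group(1) in seen:
                continue  # duplicate CRD: drop this document
            seen.add(name.group(1))
        kept.append(doc)
    return "\n---\n".join(kept)

if __name__ == "__main__":
    sys.stdout.write(dedupe_crds(sys.stdin.read()))
```

One could then pipe the concatenated helm template output through this script and into kubectl apply -f -, though this loses the single kustomize build entry point that the setup above is trying to achieve.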
Describe the solution you'd like Defining these CRDs as a Helm dependency would allow enabling the CRD creation only once.
Describe alternatives you've considered Giving more flexibility, via the Helm values, to choose which CRDs are deployed.
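A user-side workaround in the meantime might be to keep includeCRDs: true for only one chart. This avoids the collision on the shared CRDs (adoptedresources, fieldexports), but note that includeCRDs: false drops all of that chart's CRDs, so the service-specific ones (e.g. the RDS CRDs) would then have to be installed another way. A sketch of the idea, reusing the versions from the example above:

```yaml
helmCharts:
- repo: oci://public.ecr.aws/aws-controllers-k8s
  name: s3-chart
  includeCRDs: true    # this chart provides the shared CRDs once
  releaseName: s3
  version: 1.0.9
- repo: oci://public.ecr.aws/aws-controllers-k8s
  name: rds-chart
  includeCRDs: false   # skip all CRDs here to avoid the duplicate error
  releaseName: rds
  version: 1.1.11
```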
Hi!
Interesting. I really wonder how I did not notice it, as I am using FluxCD and did not run into this issue. For more context: I am installing every chart as a separate release.
I would also like to mention that I had another issue with CRDs when upgrading Helm chart versions: #2007
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
/remove-lifecycle stale