consul-k8s
Helm chart erroring when CRDs are created outside of the Helm chart and do not have specific labels and annotations
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you!
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Is your feature request related to a problem? Please describe.
We set skip_crds = true on our Helm chart install because we need to patch the consul-k8s CRDs (related to https://github.com/hashicorp/consul-k8s/issues/3520), and we created the CRDs more or less as they come from the consul-k8s release tarball. Unfortunately, we got this error when installing the Helm chart after creating the CRDs:
Error: Unable to continue with install: CustomResourceDefinition "exportedservices.consul.hashicorp.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "consul"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "consul-system"
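For context, Helm refuses to adopt resources that already exist in the cluster unless they carry its ownership metadata. Going only by the error above, each CRD would need metadata along these lines before the install can proceed (a minimal sketch; the release name consul and namespace consul-system are taken from the error message, the rest of the CRD is the spec from the release tarball, unchanged):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: exportedservices.consul.hashicorp.com
  labels:
    # Helm checks this label when deciding whether it may adopt the resource
    app.kubernetes.io/managed-by: Helm
  annotations:
    # These must match the Helm release name and namespace
    meta.helm.sh/release-name: consul
    meta.helm.sh/release-namespace: consul-system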
Feature Description
We already manage CRDs outside of Helm for many charts, and this is the first time we have seen a chart that does not honor the skip_crds = true option, which charts should respect.
Use Case(s)
Installing CRDs outside of the Helm chart.
Contributions
We patch with kustomize like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: consul
commonLabels:
  app: consul
  app.kubernetes.io/managed-by: Helm
  chart: consul-helm
  component: crd
  heritage: Helm
  release: consul
commonAnnotations:
  meta.helm.sh/release-name: consul
  meta.helm.sh/release-namespace: consul-system
resources:
  - consul.hashicorp.com_exportedservices.yaml
  - consul.hashicorp.com_ingressgateways.yaml
  - consul.hashicorp.com_meshes.yaml
  - consul.hashicorp.com_proxydefaults.yaml
  - consul.hashicorp.com_servicedefaults.yaml
  - consul.hashicorp.com_serviceintentions.yaml
  - consul.hashicorp.com_serviceresolvers.yaml
  - consul.hashicorp.com_servicerouters.yaml
  - consul.hashicorp.com_servicesplitters.yaml
  - consul.hashicorp.com_terminatinggateways.yaml
patches:
  - target:
      kind: CustomResourceDefinition
      name: ingressgateways.consul.hashicorp.com
    patch: |
      - op: add
        path: /spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/listeners/items/properties/services/items/required
        value:
          - name
          - namespace
      - op: add
        path: /spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/listeners/items/properties/services/x-kubernetes-list-map-keys
        value:
          - name
          - namespace
      - op: add
        path: /spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/listeners/items/properties/services/x-kubernetes-list-type
        value: map
      - op: add
        path: /spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/listeners/items/required
        value:
          - port
          - protocol
      - op: add
        path: /spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/listeners/x-kubernetes-list-map-keys
        value:
          - port
          - protocol
      - op: add
        path: /spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/listeners/x-kubernetes-list-type
        value: map
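After kustomize build, every CRD in the resources list ends up with metadata roughly like the following, which is what lets Helm adopt them even though they were created outside of the chart (rendered sketch for one CRD; the labels and annotations simply mirror the commonLabels and commonAnnotations above):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: exportedservices.consul.hashicorp.com
  labels:
    # Ownership label Helm's adoption check requires
    app.kubernetes.io/managed-by: Helm
    # Labels copied from commonLabels, matching what the chart itself would set
    app: consul
    chart: consul-helm
    component: crd
    heritage: Helm
    release: consul
  annotations:
    # Ownership annotations Helm's adoption check requires
    meta.helm.sh/release-name: consul
    meta.helm.sh/release-namespace: consul-system

Only the app.kubernetes.io/managed-by label and the two meta.helm.sh annotations are what the ownership check in the error message actually cares about; the remaining labels just reproduce what the chart's own CRD templates appear to set.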
@komapa Would resolving https://github.com/hashicorp/consul-k8s/issues/3520 also solve this issue?
Yes, I guess either of those would suffice, but in general I am worried that not being able to track CRDs separately from Helm will become an anti-pattern for the Consul Helm chart. For our immediate needs, either of the two would work, yes :)