cluster-api-operator
CRDs aren't inside the `crds` folder
Anything else you would like to add:
This goes against Helm best practices. I understand that for non-GitOps workflows having the CRDs inside the `templates` folder (with an `if`!) is necessary, but they should still be inside the `crds` folder for normal operations, like not needing `post-*` hooks for creating CRs, and GitOps for CRD upgrades.
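For illustration, a layout that satisfies both cases keeps the plain manifests in `crds/` (so `helm install` applies them and GitOps tooling can manage upgrades) and only renders templated copies behind a flag. This is just a sketch; the `installCRDsFromTemplates` value is a hypothetical name, not an existing chart option:

{{- /* templates/crds.yaml — illustrative sketch only. The authoritative manifests
       live in the chart's crds/ folder; this template re-renders them solely for
       workflows that want `helm upgrade` to manage CRD updates. */}}
{{- if .Values.installCRDsFromTemplates }}
{{- range $path, $_ := .Files.Glob "crds/*.yaml" }}
---
{{ $.Files.Get $path }}
{{- end }}
{{- end }}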
Environment:
- Cluster-api-operator version: N/A
- Cluster-api version: N/A
- Minikube/KIND version: N/A
- Kubernetes version (use `kubectl version`): N/A
- OS (e.g. from `/etc/os-release`): N/A
- Helm Chart version: 0.6.0
/kind bug
This issue is currently awaiting triage.
If CAPI Operator contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi, thanks for opening the issue. Unfortunately, it's not possible to upgrade CRDs if they are located in the CRD folder https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations
Yes and no; for manual workflows, like `helm upgrade`, it's not easily possible (although if you're already doing manual stuff, you might as well just `apply` the CRDs from the `crds` folder 🤷).
But using GitOps it's easily possible, see https://fluxcd.io/flux/components/helm/helmreleases/#crds
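For reference, once the CRDs are in the `crds/` folder the Flux side is just the CRD policy on the HelmRelease; a minimal sketch, with placeholder names:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: capi-operator              # placeholder release name
  namespace: capi-operator-system  # placeholder namespace
spec:
  interval: 10m
  chart:
    spec:
      chart: cluster-api-operator
      sourceRef:
        kind: HelmRepository
        name: capi-operator        # placeholder HelmRepository name
  install:
    crds: CreateReplace            # apply CRDs from the chart's crds/ folder on install
  upgrade:
    crds: CreateReplace            # and replace them on upgrade, which plain Helm skips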
Ideally, we want to support `helm upgrade` for upgrading all manifests of the operator. Would it help if we split into two different charts: one for CRDs and a second one for all the rest?
@damdo @Fedosin @furkatgofurov7 ^^
> Ideally, we want to support `helm upgrade` for upgrading all manifests of the operator.
I understand, but doing it this way breaks other stuff, even inside the chart itself, as shown in #283
Currently, you can't really use this chart in a gitops way because of this CRD stuff. And even the manual way is broken, although I guess that could have another cause.
If the CRDs were inside the `crds` folder, even if only additionally, this would work.
Only if cert-manager fixes https://github.com/cert-manager/cert-manager/issues/6179 first, but this is a two-part problem after all.
> Would it help if we split into 2 different charts: one for CRDs and a second one for all the rest?
Not really, the problem is explicitly that the CRDs aren't in the `crds` folder; extracting them into another chart wouldn't help and I'd just open an issue for that chart 😅
EDIT: maybe miscommunication, but it's ok to have the CRDs in another chart, as long as they're in the `crds` folder; `kube-prometheus-stack`, for example, does it this way.
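For context, the split-chart pattern boils down to a tiny companion chart whose only payload is its `crds/` folder of plain manifests, pulled in as a dependency. A generic sketch (chart names, versions, and the repository URL are placeholders, not the actual kube-prometheus-stack layout):

# Chart.yaml of the main chart (illustrative)
apiVersion: v2
name: cluster-api-operator
version: 0.6.0
dependencies:
  - name: cluster-api-operator-crds   # hypothetical companion chart whose only
    version: 0.6.0                    # content is a crds/ folder of plain CRD manifests
    repository: https://example.invalid/charts   # placeholder repository URL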
We also have a use case where we might template CRDs: some users would like to avoid using cert-manager and prefer a custom solution, which would require removing/replacing the cert-manager annotation on the CRDs. At this point, I'm not sure how to support all options without breaking others :(
> We also have a use case where we might template CRDs
What are you templating? 🤔
> some users would like to avoid using cert-manager and prefer a custom solution, that will require removing/replacing cert-manager annotation from CRDs
Exactly this isn't even possible, as the cert-manager CRDs are always created.
Annotations on CRDs? Are you sure you don't mean on CRs?
Yes, we set the CA injector annotation on the CRD itself: https://cert-manager.io/docs/concepts/ca-injector/
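Concretely, that is the CA-injector annotation pointing the CRD's conversion webhook at a Certificate. A trimmed, illustrative example (the namespace, Certificate, and Service names are placeholders for whatever the chart actually renders):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: coreproviders.operator.cluster.x-k8s.io
  annotations:
    # cert-manager's ca-injector copies the CA of this Certificate into the
    # conversion webhook's clientConfig.caBundle
    cert-manager.io/inject-ca-from: capi-operator-system/capi-operator-serving-cert
spec:
  group: operator.cluster.x-k8s.io
  names:
    kind: CoreProvider
    plural: coreproviders
  scope: Namespaced
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: capi-operator-webhook-service
          namespace: capi-operator-system
          path: /convert
  # versions: [...] trimmed for brevity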
For that special use-case I would, from your perspective, just decide on a name and hardcode that. If the end user doesn't want to use the included cert-manager and supply their own they just have to conform to that. And if they want to handle things without cert-manager, then that's just a not-supported configuration and they have to figure things out for themselves 😅 K8s is just not flexible in that regard 🤷
IMHO one should mainly support the best approaches (which is of course subjective) plus the most-wanted and easy-to-achieve alternatives.
And in my opinion, easy handling of CRs, following best practices, and going for GitOps are the preferred way.
Also, having the templated CRDs, with an `if`, is low-hanging fruit for those poor souls that don't already use GitOps 😉
Hi all, Flux user here too. I can confirm that with the CRDs in `templates` and not in the `crds` folder, we can get capi-operator installed, but the Helm hooks fail to create the providers (defined in values.yaml: core/bootstrap/controlPlane/infrastructure).
{"level":"error","ts":"2023-12-06T10:37:14.003Z","msg":"Reconciler error","controller":"helmrelease","controllerGroup":"helm.toolkit.fluxcd.io","controllerKind":"HelmRelease","HelmRelease":{"name":"capi-operator","namespace":"capi-operator-system"},"namespace":"capi-operator-system","name":"capi-operator","reconcileID":"8d7c96e8-16b3-4b3f-95a6-7d81c26665f5","error":"Helm install failed: failed post-install: unable to build kubernetes object for deleting hook cluster-api-operator/templates/core.yaml: resource mapping not found for name: \"cluster-api\" namespace: \"capi-system\" from \"\": no matches for kind \"CoreProvider\" in version \"operator.cluster.x-k8s.io/v1alpha2\"\nensure CRDs are installed first"}
As cwrau says, Flux can create/replace CRDs: https://fluxcd.io/flux/components/helm/helmreleases/#crds
So I wonder about the benefit of using the CAPI operator rather than clusterctl?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
What is the current status of this issue?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
> Yes, we set the CA injector annotation on the CRD itself: cert-manager.io/docs/concepts/ca-injector
> For that special use-case I would, from your perspective, just decide on a name and hardcode that. If the end user doesn't want to use the included cert-manager and supply their own they just have to conform to that. And if they want to handle things without cert-manager, then that's just a not-supported configuration and they have to figure things out for themselves 😅 K8s is just not flexible in that regard 🤷
Or you could do the following:
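{{- /* Collect the names of every CRD shipped in the chart's crds/ folder so the
       post-install/post-upgrade Job below can patch them. */ -}}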
{{- $crdNames := list -}}
{{- range $path, $_ := .Files.Glob "crds/*.yaml" -}}
{{- $manifests := $.Files.Get $path | splitList "---" -}}
{{- range $manifest := $manifests -}}
{{- $crd := $manifest | fromYaml -}}
{{- $crdName := dig "metadata" "name" "" $crd -}}
{{- if $crdName -}}
{{- $crdNames = append $crdNames $crdName -}}
{{- end -}}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ printf "%s-patch-crds" (include "common.names.fullname" .) }}
  namespace: {{ .Release.Namespace }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ printf "%s-patch-crds" (include "common.names.fullname" .) }}
  namespace: {{ .Release.Namespace }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
rules:
  - verbs:
      - get
      - patch
    apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    resourceNames: {{- $crdNames | toYaml | nindent 6 }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ printf "%s-patch-crds" (include "common.names.fullname" .) }}
  namespace: {{ .Release.Namespace }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ printf "%s-patch-crds" (include "common.names.fullname" .) }}
subjects:
  - kind: ServiceAccount
    name: {{ printf "%s-patch-crds" (include "common.names.fullname" .) }}
    namespace: {{ .Release.Namespace }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ printf "%s-patch-crds" (include "common.names.fullname" .) }}
  namespace: {{ .Release.Namespace }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
  template:
    metadata:
      labels: {{- include "common.labels.standard" . | nindent 8 }}
    spec: {{- include "common.images.renderPullSecrets" (dict "images" (list .Values.kubectl.image) "context" $) | nindent 6 }}
      restartPolicy: Never
      automountServiceAccountToken: true
      serviceAccountName: {{ printf "%s-patch-crds" (include "common.names.fullname" .) }}
      containers:
        - name: patch-crds
          image: {{ include "common.images.image" (dict "imageRoot" .Values.kubectl.image "global" .Values.global) }}
          imagePullPolicy: {{ empty .Values.kubectl.image.digest | ternary "Always" "IfNotPresent" }}
          securityContext:
            allowPrivilegeEscalation: false
            seccompProfile:
              type: RuntimeDefault
            runAsNonRoot: true
            runAsGroup: 1000
            runAsUser: 1000
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          resources: {{- include "common.resources" .Values.kubectl.resources | nindent 12 }}
          command:
            - bash
            - -ex
            - -c
            - |
              for crd in {{ $crdNames | join " " }}; do
                kubectl annotate crd "$crd" --overwrite {{ printf "cert-manager.io/inject-ca-from=%s" (printf "%s/%s-%s" .Release.Namespace (include "common.names.fullname" $) "webhook-serving-certificate") }}
              done
This is what we're using internally for our operator: controller-gen creates the CRDs in the `crds` folder, and this Job patches the cert-manager annotation onto them.
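If this approach were applied to this chart, a quick check after the hook has run could look like this (the CRD name and the expected value are illustrative):

kubectl get crd coreproviders.operator.cluster.x-k8s.io \
  -o jsonpath='{.metadata.annotations.cert-manager\.io/inject-ca-from}'
# expected output: the namespace/Certificate the chart rendered, e.g.
# capi-operator-system/capi-operator-webhook-serving-certificate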