🧑🏼‍🌾 Deploy `gardenlet`s Through Custom Resource Via `gardener-operator`
How to categorize this issue?
/area ops-productivity
/kind enhancement
What would you like to be added:
`gardener-operator` should also be able to deploy `gardenlet`s to target seed clusters (unmanaged seeds, sometimes also known as "soils"). Today, human operators have to deploy the `gardenlet` Helm chart there manually. Instead, we could add a new `Gardenlet` CRD to the `operator.gardener.cloud/v1alpha1` API:
```yaml
apiVersion: operator.gardener.cloud/v1alpha1
kind: Gardenlet
metadata:
  name: local
spec:
  # kubeconfigSecretRef:
  #   name: kubeconfig-to-target-cluster
  #   namespace: garden
  gardenlet:
    bootstrap: BootstrapToken
    config:
      apiVersion: gardenlet.config.gardener.cloud/v1alpha1
      kind: GardenletConfiguration
      seedConfig:
        spec:
          # insert seed specification here
```
`gardener-operator` obviously needs a kubeconfig for the target seed cluster, and all of this only works when it can reach that cluster over the network. The `.spec.kubeconfigSecretRef` is optional; when it is not provided, the `gardenlet` is deployed into the runtime cluster of `gardener-operator`.
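For illustration, the referenced Secret could look like the sketch below. The secret name and namespace are taken from the example above; the `kubeconfig` data key is an assumption, not a confirmed contract:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kubeconfig-to-target-cluster
  namespace: garden
type: Opaque
stringData:
  # Assumed data key; holds an admin kubeconfig for the target seed cluster.
  kubeconfig: |
    apiVersion: v1
    kind: Config
    # clusters, users, and contexts for the target seed cluster go here
```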
We already have the necessary code in `gardenlet`'s `ManagedSeed` controller and just need to make it reusable.
PoC code is available at https://github.com/metal-stack/gardener/tree/hackathon-operator-gardenlet.
Why is this needed:
gardener-operator can only deploy the Gardener control plane components (API server, controller-manager, etc.). gardenlets must be deployed manually to target seed clusters (typically, via the Helm chart). When the gardener-operator can reach such seed clusters network-wise, it should be possible to make it easily deploy gardenlets via a new operator.gardener.cloud/v1alpha1.Gardenlet custom resource.
Is there a plan for supporting seeds that are not reachable network-wise from the cluster hosting the gardener control plane?
Not really - we discussed this scenario once with @timuthy. We could deploy `gardener-operator` and its `Gardenlet` CRD also to such seeds, create a `Gardenlet` CR, and let it deploy the `gardenlet`. However, this is not much better than directly deploying the `gardenlet` Helm chart into the cluster, since `gardener-operator` still requires access to the virtual garden (where does this come from?).
The Gardener project currently lacks enough active contributors to adequately respond to all issues. This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Mark this issue as rotten with `/lifecycle rotten`
- Close this issue with `/close`

/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
After https://github.com/gardener-community/hackathon/blob/main/2024-05_Schelklingen/README.md#-gardenlet-self-upgrades-for-unmanaged-seeds and now with https://github.com/gardener/gardener/issues/9830, I think it does not make sense to proceed further with this item. `gardener-operator` managing `gardenlet`s does not work in all cases (especially if the runtime cluster (where `gardener-operator` runs) does not have network connectivity to the target clusters into which a `gardenlet` shall be deployed).
The approach in https://github.com/gardener/gardener/issues/9830 is more generic and works for all scenarios. It only requires a single manual installation of `gardenlet` during initial seed registration; from then on, `gardenlet` takes care of updating itself as long as the respective `Gardenlet` resource is properly maintained in the garden cluster.
/close not-planned
@rfranzke: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
After another discussion with @timuthy, we decided to /reopen this issue and get it implemented.
Even though the approach in #9830 is more generic, we can assume that `gardener-operator` does have network connectivity to the target seed clusters in the vast majority of use cases. In this light, it would be much more convenient for users if they could simply apply a resource to the garden cluster in order to materialize the `gardenlet`. From then on, we can rely on the self-upgrades (#9830), i.e., `gardener-operator` would only be responsible for the very first deployment. This should not only improve the user experience but also make e2e testing easier.
Without it, users would have to follow the docs and create a bootstrap token, craft a bootstrap kubeconfig, prepare a Helm values file, and then deploy the Helm chart.
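To illustrate the manual effort, a values file for the `gardenlet` Helm chart takes roughly the following shape. This is a hedged sketch: the field names mirror the `GardenletConfiguration` structure shown earlier, but the authoritative values schema should be taken from the chart itself:

```yaml
# Illustrative sketch only, not the authoritative chart schema.
config:
  gardenClientConnection:
    bootstrapKubeconfig:
      # Bootstrap kubeconfig crafted by hand from a bootstrap token.
      name: gardenlet-kubeconfig-bootstrap
      namespace: garden
      kubeconfig: |
        apiVersion: v1
        kind: Config
        # contains the bootstrap token for initial registration
  seedConfig:
    spec:
      # insert seed specification here
```

With the proposed `Gardenlet` CR, all of this boilerplate would be generated by `gardener-operator` instead.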
If there is no network connectivity, we (or rather gardener-operator) cannot help, of course. A human operator would have to follow these steps at least once (and can then rely on the self-upgrades). However, as said, in the vast majority of scenarios, network connectivity exists, so we can simplify all this a bit with little effort (we already have the code and just need to wire it into gardener-operator).
/assign
@rfranzke: Reopened this issue.
> Without it, users would have to follow the docs and create a bootstrap token, craft a bootstrap kubeconfig, prepare a Helm values file, and then deploy the Helm chart.
We recently went through implementing this in Flux when preparing our migration to `gardener-operator`, and I agree that this is a great candidate for automation in `gardener-operator` itself instead of in the deployment mechanism around it: every installation needs it, and the code is almost there already.
Thanks for the feedback 👍🏻
All tasks have been completed. /close
@rfranzke: Closing this issue.
In response to this:
All tasks have been completed. /close