cluster-api-operator
:sparkles: Handle config secret updates
What this PR does / why we need it:
As a cluster operator, I want to iterate on my infrastructure provider configuration and be sure the relevant cluster-api providers are provisioned as configured.
Currently, the GenericProvider reconciler considers only the provider's spec and ignores the data referenced by the ConfigSecret field.
If that referenced configuration changes, the provider's deployment is left unchanged.
For example, if an infrastructure provider is defined as:
```yaml
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: aws
  namespace: infrastructure-aws-system
spec:
  version: v2.5.2
  configSecret:
    name: aws-variables
  deployment:
    replicas: 1
---
apiVersion: v1
kind: Secret
metadata:
  name: aws-variables
  namespace: capi-config
type: Opaque
stringData:
  AWS_B64ENCODED_CREDENTIALS: "SOME_BASE_64_CREDS"
```
it is impossible to have the provider pick up a new value of AWS_B64ENCODED_CREDENTIALS without also changing the provider version, the deployment spec, or the verbosity level.
This commit proposes to also take the content of the configuration into consideration, so that any change to it leads to an adjustment of the deployment (see the sketch below).
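For illustration only, here is a minimal sketch of one way such a content change could be detected (not necessarily the approach taken in this PR): compute a deterministic hash of the secret's data and compare it with the hash previously recorded on the provider deployment. The function name `configSecretHash` is hypothetical.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// configSecretHash returns a deterministic hash of a Secret's Data map.
// Sorting the keys keeps the hash stable across map iteration orders, so it
// changes only when the secret content actually changes.
func configSecretHash(data map[string][]byte) string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0}) // separator to avoid ambiguous concatenation
		h.Write(data[k])
		h.Write([]byte{0})
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	before := map[string][]byte{"AWS_B64ENCODED_CREDENTIALS": []byte("SOME_BASE_64_CREDS")}
	after := map[string][]byte{"AWS_B64ENCODED_CREDENTIALS": []byte("ROTATED_CREDS")}
	fmt.Println(configSecretHash(before) != configSecretHash(after)) // true: a rollout is needed
}
```

Storing such a hash, for example as a pod-template annotation on the generated Deployment, would force a rollout whenever the secret content changes, similar to the well-known checksum-annotation pattern.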
Currently, when a ConfigSecret is updated, the change is not automatically reflected in all providers using it. This PR therefore also adds an optional reconciler that triggers the reconciliation of every provider referencing the updated secret (sketched below).
Both changes combined ensure that any configuration update leads to an update of the provider deployment, with the least possible change in behaviour.
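As a rough sketch of how such an optional reconciler could be wired with controller-runtime (assuming v0.15+ handler signatures and the operator's v1alpha2 Go types; the actual implementation in this PR may differ):

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	// Assumed import path and type names for the operator's v1alpha2 API.
	operatorv1 "sigs.k8s.io/cluster-api-operator/api/v1alpha2"
)

// secretToProviders maps a changed Secret to reconcile requests for every
// InfrastructureProvider whose spec.configSecret references it.
func secretToProviders(c client.Client) handler.MapFunc {
	return func(ctx context.Context, obj client.Object) []reconcile.Request {
		secret, ok := obj.(*corev1.Secret)
		if !ok {
			return nil
		}
		providers := &operatorv1.InfrastructureProviderList{}
		if err := c.List(ctx, providers); err != nil {
			return nil
		}
		var requests []reconcile.Request
		for i := range providers.Items {
			p := &providers.Items[i]
			ref := p.Spec.ConfigSecret
			if ref == nil || ref.Name != secret.GetName() {
				continue
			}
			// ConfigSecret.Namespace is assumed to default to the provider's namespace.
			ns := ref.Namespace
			if ns == "" {
				ns = p.GetNamespace()
			}
			if ns != secret.GetNamespace() {
				continue
			}
			requests = append(requests, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(p)})
		}
		return requests
	}
}

// SetupWithManager wires a watch on Secrets so that secret updates enqueue the
// providers referencing them, in addition to the usual watch on the provider itself.
func SetupWithManager(mgr ctrl.Manager, r reconcile.Reconciler) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&operatorv1.InfrastructureProvider{}).
		Watches(&corev1.Secret{}, handler.EnqueueRequestsFromMapFunc(secretToProviders(mgr.GetClient()))).
		Complete(r)
}
```

In practice one would likely filter events and use a field index rather than listing every provider on each secret event; the sketch only shows the overall shape of the mapping.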
Welcome @tjamet!
It looks like this is your first PR to kubernetes-sigs/cluster-api-operator 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/cluster-api-operator has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @tjamet. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Deploy Preview for kubernetes-sigs-cluster-api-operator ready!
| Name | Link |
|---|---|
| Latest commit | 57622817a41f99e8d3ee630744c2db30677f2b7c |
| Latest deploy log | https://app.netlify.com/sites/kubernetes-sigs-cluster-api-operator/deploys/66d97ecb7d8e1b0008257425 |
| Deploy Preview | https://deploy-preview-565--kubernetes-sigs-cluster-api-operator.netlify.app |
To edit notification comments on pull requests, go to your Netlify site configuration.
/ok-to-test
Digging into the failing test code, I don't understand how the proposed changes could influence this status.
The failing test (TestCheckCAPIOpearatorAvailability) seems to verify that deployments are reported as working as expected.
It creates a deployment object running nginx with generateCAPIOperatorDeployment, updates its status to mark it as running, and checks that CheckDeploymentAvailability reports the deployment status as expected.
I can see the same test failing similarly in a dependabot PR, which makes me lean towards test instability.
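For context, an availability check of this kind typically boils down to inspecting the Deployment's Available condition; a generic sketch (not the repository's actual CheckDeploymentAvailability code) looks like this:

```go
package controllers

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// deploymentAvailable reports whether the Deployment's Available condition is
// True; tests usually fake this by patching the Deployment's status subresource.
func deploymentAvailable(d *appsv1.Deployment) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}
```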
/retest
/lgtm
cc @furkatgofurov7 @alexander-demicev
LGTM label has been added.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: furkatgofurov7
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
- ~~OWNERS~~ [furkatgofurov7]
Approvers can indicate their approval by writing `/approve` in a comment
Approvers can cancel approval by writing `/approve cancel` in a comment
/lgtm
LGTM label has been added.