cluster-api
:sparkles: ClusterClass patches: Allow matching all MachineDeploymentClasses
What this PR does / why we need it:
This PR adds a new feature to allow matching all MachineDeploymentClasses via "*".
- Match all if MachineDeploymentClass.Names is ["*"]:
  matchResources:
    machineDeploymentClass:
      names: ["*"]
- Match some if names are explicitly listed:
  matchResources:
    machineDeploymentClass:
      names: [something, something, something]
- Match some if a glob matches:
  matchResources:
    machineDeploymentClass:
      names: [something-*]
- Match none if no machineDeploymentClass is set (should produce a webhook failure):
  matchResources:
    machineDeploymentClass: {}
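As a rough illustration of the rules above, here is a minimal sketch of the proposed "*"/glob semantics. This is not the actual cluster-api implementation; `matchesMDClass` is a hypothetical helper, and Go's `path.Match` stands in for whatever glob library the PR actually uses.

```go
// Illustrative sketch of the proposed "*"/glob matching semantics.
// matchesMDClass is a hypothetical helper, not the actual CAPI code.
package main

import (
	"fmt"
	"path"
)

// matchesMDClass reports whether a MachineDeploymentClass name matches any
// entry in the selector's names list: "*" matches everything, other entries
// are treated as shell-style globs (which also covers exact names).
func matchesMDClass(selectorNames []string, class string) bool {
	for _, n := range selectorNames {
		if n == "*" {
			return true // match all
		}
		if ok, _ := path.Match(n, class); ok {
			return true // exact or glob match, e.g. "something-*"
		}
	}
	return false
}

func main() {
	fmt.Println(matchesMDClass([]string{"*"}, "default-worker"))          // true
	fmt.Println(matchesMDClass([]string{"something-*"}, "something-gpu")) // true
	fmt.Println(matchesMDClass([]string{"a", "b"}, "c"))                  // false
}
```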
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #5648
Hi @valaparthvi. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test
on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test
label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think using `*`, which is already established in Kubernetes for ClusterRoles/Roles, is better than using something like an empty or missing selector. It's way more explicit that way.
What would an empty selector look like for the `names` slice that we have today?
matchResources:
machineDeploymentClass:
names:
- default-worker
What would an empty selector look like for the `names` slice that we have today?
matchResources:
machineDeploymentClass: {}
That's my understanding of how it normally works in Kubernetes with the label selector. It would require enabling it (and maybe even setting it) in the webhook.
I think it's more consistent to stick to how selectors work in Kubernetes rather than RBAC, despite it being implicit. Definitely not a blocker for implementing it as is with the '*' notation, but it does depart from convention IMO.
/ok-to-test
@killianmuldoon @sbueringer So, what have you decided? Are we going with the K8s way or the RBAC way? The K8s way does not seem very intuitive to me, though if that is how it generally works, it might be intuitive for others. RBAC, on the other hand, is quite simple.
Thought about it a bit more. I think Killian is right. It makes sense to stay consistent with Kubernetes selectors.
This would give us the following semantics:
Match specific MD classes:
  matchResources:
    controlPlane: true
    machineDeploymentClass:
      names:
      - default-worker
Match all MD classes:
  matchResources:
    controlPlane: true
    machineDeploymentClass: {}
Match no MD classes:
  matchResources:
    controlPlane: true
We should not set the selector in the defaulting webhook. The idea in the ClusterClass API is that folks should explicitly decide to select the InfraCluster, CP, MD classes, etc. If we defaulted to `{}`, the selector would match all MD classes by default, and folks would have to set some non-matching MD class names to still be able to not match MD classes.
(I think it comes down to this: in Kubernetes the default is "match all", while in our case the default is "match none".)
One nice thing about `{}` is that it works at the `machineDeploymentClass` level, while `*` would be part of `names`. It seems better to have that "match all" only once on `machineDeploymentClass` vs needing it in `names` and every other filter criterion we might add in the future.
Independent of that, I absolutely don't like what they did there; it is way too implicit for my taste.
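To illustrate the distinction being discussed, here is a hypothetical sketch with simplified types (the real types live in the cluster-api v1beta1 API and the matching logic in the topology controller): an unset selector matches nothing, while an empty `{}` selector matches everything.

```go
// Hypothetical sketch: simplified selector types mirroring the shapes above.
// The real types live in the cluster-api v1beta1 API.
package main

import "fmt"

type MachineDeploymentClassSelector struct {
	Names []string
}

type MatchResources struct {
	ControlPlane           bool
	MachineDeploymentClass *MachineDeploymentClassSelector // pointer: nil means unset
}

// matchesMDClass encodes the semantics discussed above: an unset selector
// matches no MD classes (the CAPI default), an empty selector ({}) matches
// all of them (Kubernetes selector style), otherwise match by name.
func matchesMDClass(m MatchResources, class string) bool {
	sel := m.MachineDeploymentClass
	if sel == nil {
		return false // selector not set: match none
	}
	if len(sel.Names) == 0 {
		return true // empty selector {}: match all
	}
	for _, n := range sel.Names {
		if n == class {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(matchesMDClass(MatchResources{}, "default-worker")) // false: unset
	all := MatchResources{MachineDeploymentClass: &MachineDeploymentClassSelector{}}
	fmt.Println(matchesMDClass(all, "default-worker")) // true: {} matches all
}
```

With `{}` the "match all" decision sits once at the `machineDeploymentClass` level, rather than being a sentinel value inside `names` and any future filter field.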
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign fabriziopandini for approval by writing `/assign @fabriziopandini` in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing `/approve` in a comment.
Approvers can cancel approval by writing `/approve cancel` in a comment.
@sbueringer I can't seem to run the unit tests for util/patch/patch_test.go locally, neither from my branch nor from the main branch. I see they're failing on CI as well. Do I need to run some setup? I have a kind cluster running.
Here is the output from when I run the tests.
/usr/local/go/bin/go tool test2json -t /tmp/GoLand/___TestPatchHelper_in_sigs_k8s_io_cluster_api_util_patch.test -test.v -test.paniconexit0 -test.run ^\QTestPatchHelper\E$
E0727 18:57:10.811564 125075 logr.go:265] controller-runtime/test-env "msg"="unable to start the controlplane" "error"="fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory" "tries"=0
E0727 18:57:10.812226 125075 logr.go:265] controller-runtime/test-env "msg"="unable to start the controlplane" "error"="fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory" "tries"=1
E0727 18:57:10.812763 125075 logr.go:265] controller-runtime/test-env "msg"="unable to start the controlplane" "error"="fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory" "tries"=2
E0727 18:57:10.813390 125075 logr.go:265] controller-runtime/test-env "msg"="unable to start the controlplane" "error"="fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory" "tries"=3
E0727 18:57:10.813975 125075 logr.go:265] controller-runtime/test-env "msg"="unable to start the controlplane" "error"="fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory" "tries"=4
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x14f55af]
goroutine 1 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/testing/controlplane.(*APIServer).Stop(0x14)
/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/vendor/sigs.k8s.io/controller-runtime/pkg/internal/testing/controlplane/apiserver.go:424 +0x8f
sigs.k8s.io/controller-runtime/pkg/internal/testing/controlplane.(*ControlPlane).Stop(0xc0005af180)
/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/vendor/sigs.k8s.io/controller-runtime/pkg/internal/testing/controlplane/plane.go:97 +0x3d
sigs.k8s.io/controller-runtime/pkg/envtest.(*Environment).Stop(0xc0005af180)
/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/vendor/sigs.k8s.io/controller-runtime/pkg/envtest/server.go:194 +0x114
sigs.k8s.io/cluster-api/internal/test/envtest.newEnvironment({0x0, 0x0, 0xc000598600})
/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/internal/test/envtest/environment.go:209 +0x22e5
sigs.k8s.io/cluster-api/internal/test/envtest.Run({0x1de86b0, 0xc0004ccf40}, {0xc000184000, {0x0, 0x0, 0x0}, 0x0, 0x0, 0x1c2d2a8, {0x0, ...}})
/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/internal/test/envtest/environment.go:112 +0x66
sigs.k8s.io/cluster-api/util/patch.TestMain(0xc0000001a0)
/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/util/patch/suite_test.go:39 +0x66
main.main()
_testmain.go:51 +0x165
The issue seems to be your env test setup - do all the tests run when you run `make test`? You may need to set the `KUBEBUILDER_ASSETS` env variable.
I was actually running the tests directly from my GoLand editor, and I didn't realize I had to set the env var, but it seems `make test` passed on both branches.
I also noticed pull-cluster-api-verify-main is failing because I failed to update config/crd/bases/cluster.x-k8s.io_clusterclasses.yaml. Is there a Make target that I should run to update this YAML, or will this be a manual change?
pull-cluster-api-verify-main failure
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,GeneratePatchesResponseItem,Patch
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,ValidateTopologyRequest,Items
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,ValidateTopologyRequest,Variables
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,ValidateTopologyRequestItem,Variables
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api'
diff --git a/api/v1beta1/zz_generated.openapi.go b/api/v1beta1/zz_generated.openapi.go
index 01c418b08..9868b1071 100644
--- a/api/v1beta1/zz_generated.openapi.go
+++ b/api/v1beta1/zz_generated.openapi.go
@@ -2715,7 +2715,7 @@ func schema_sigsk8sio_cluster_api_api_v1beta1_PatchSelectorMatch(ref common.Refe
},
"machineDeploymentClass": {
SchemaProps: spec.SchemaProps{
- Description: "MachineDeploymentClass selects templates referenced in specific MachineDeploymentClasses in .spec.workers.machineDeployments.",
+ Description: "MachineDeploymentClass selects templates referenced in specific MachineDeploymentClasses in .spec.workers.machineDeployments. Note: If set to an empty object it will match all MachineDeploymentClasses.",
Ref: ref("sigs.k8s.io/cluster-api/api/v1beta1.PatchSelectorMatchMachineDeploymentClass"),
},
},
diff --git a/config/crd/bases/cluster.x-k8s.io_clusterclasses.yaml b/config/crd/bases/cluster.x-k8s.io_clusterclasses.yaml
index d9f504c8a..b96e494dc 100644
--- a/config/crd/bases/cluster.x-k8s.io_clusterclasses.yaml
+++ b/config/crd/bases/cluster.x-k8s.io_clusterclasses.yaml
@@ -770,9 +770,10 @@ spec:
referenced in .spec.infrastructure.
+
set to an empty object it will match all MachineDeploymentClasses.'
properties:
names:
description: Names selects templates by class
generated files are out of date, run make generate
make: *** [Makefile:482: verify-gen] Error 1
+ EXIT_VALUE=2
The CRD definitions are created using `make generate` and its sub-targets. `make generate` takes a long time to run, so you might want to just run `make generate-manifests` and maybe `make generate-openapi`.
Note - it's not a good idea to run `make verify-gen` locally as it runs the full `make generate`. If you're in doubt, you can just run the full job (takes about 20 minutes for me locally, but heavily depends on setup and OS).
Thank you for this! And I think it's `make generate-go-openapi`, not `make generate-openapi`; I couldn't find that target.
I am not sure how to fix the pull-cluster-api-verify-main job. I've run different make targets locally, but I seem to have gotten nowhere. I've run `make generate`, which modifies a few files, none of which is the file in question (api/v1beta1/zz_generated.openapi.go). I've also run `make generate-go-openapi`, which seems to do nothing.
Output from `make generate`:
git status
On branch 5648-allow-matching-all-mdc
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
renamed: api/v1alpha3/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/api/v1alpha3/zz_generated.conversion.go
renamed: api/v1alpha4/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/api/v1alpha4/zz_generated.conversion.go
renamed: bootstrap/kubeadm/api/v1alpha3/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/bootstrap/kubeadm/api/v1alpha3/zz_generated.conversion.go
renamed: bootstrap/kubeadm/api/v1alpha4/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/bootstrap/kubeadm/api/v1alpha4/zz_generated.conversion.go
renamed: bootstrap/kubeadm/types/upstreamv1beta1/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/bootstrap/kubeadm/types/upstreamv1beta1/zz_generated.conversion.go
renamed: bootstrap/kubeadm/types/upstreamv1beta2/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/bootstrap/kubeadm/types/upstreamv1beta2/zz_generated.conversion.go
renamed: bootstrap/kubeadm/types/upstreamv1beta3/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/bootstrap/kubeadm/types/upstreamv1beta3/zz_generated.conversion.go
renamed: controlplane/kubeadm/api/v1alpha3/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/controlplane/kubeadm/api/v1alpha3/zz_generated.conversion.go
renamed: controlplane/kubeadm/api/v1alpha4/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/controlplane/kubeadm/api/v1alpha4/zz_generated.conversion.go
renamed: exp/addons/api/v1alpha3/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/exp/addons/api/v1alpha3/zz_generated.conversion.go
renamed: exp/addons/api/v1alpha4/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/exp/addons/api/v1alpha4/zz_generated.conversion.go
renamed: exp/api/v1alpha3/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/exp/api/v1alpha3/zz_generated.conversion.go
renamed: exp/api/v1alpha4/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/exp/api/v1alpha4/zz_generated.conversion.go
renamed: internal/runtime/test/v1alpha1/zz_generated.conversion.go -> github.com/kubernetes-sigs/cluster-api/internal/runtime/test/v1alpha1/zz_generated.conversion.go
new file: test/infrastructure/docker/github.com/kubernetes-sigs/cluster-api/test/infrastructure/docker/api/v1alpha3/zz_generated.conversion.go
new file: test/infrastructure/docker/github.com/kubernetes-sigs/cluster-api/test/infrastructure/docker/api/v1alpha4/zz_generated.conversion.go
new file: test/infrastructure/docker/github.com/kubernetes-sigs/cluster-api/test/infrastructure/docker/exp/api/v1alpha3/zz_generated.conversion.go
new file: test/infrastructure/docker/github.com/kubernetes-sigs/cluster-api/test/infrastructure/docker/exp/api/v1alpha4/zz_generated.conversion.go
Can someone help me fix this? @killianmuldoon
@valaparthvi When I run `make generate-go-openapi` locally I get changes in api/v1beta1/zz_generated.openapi.go. Could you share some information about the environment in which you're running the command? e.g. the most recent cluster-api commit (before your changes), OS type and kernel version, and the directory in which you're running the make command.
i.e. the output of:
- `git log --oneline | head -n 5`
- `go env | grep 'GOOS\|GOARCH\|GOVERSION\|GOPATH'`
- `pwd` in the directory you're running the make commands
Note the above will likely only work for Mac / Linux. If you're in a Windows environment you may need to find a different path to providing that information.
I ran `make generate-go-openapi` but it did not make any change to fix the failure.
Here's the output from when I ran it:
make generate-go-openapi
GOBIN=/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/hack/tools/bin ./scripts/go_install.sh k8s.io/kube-openapi/cmd/openapi-gen openapi-gen 5e7f5fd
make[1]: Entering directory '/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/tmp'
make[1]: *** No rule to make target 'clean-generated-openapi-definitions'. Stop.
make[1]: Leaving directory '/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/tmp'
** Generating openapi schema for types in ./api/v1beta1 **
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,ClusterClassPatch,Definitions
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,ClusterClassSpec,Patches
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,ClusterClassSpec,Variables
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,JSONSchemaProps,Enum
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,JSONSchemaProps,Required
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,MachineDeploymentVariables,Overrides
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,MachineHealthCheckClass,UnhealthyConditions
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,MachineHealthCheckSpec,UnhealthyConditions
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,MachineHealthCheckStatus,Targets
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,NetworkRanges,CIDRBlocks
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,PatchDefinition,JSONPatches
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,PatchSelectorMatchMachineDeploymentClass,Names
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,Topology,Variables
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,WorkersClass,MachineDeployments
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/api/v1beta1,WorkersTopology,MachineDeployments
make[1]: Entering directory '/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/tmp'
make[1]: *** No rule to make target 'clean-generated-openapi-definitions'. Stop.
make[1]: Leaving directory '/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api/tmp'
** Generating openapi schema for types in ./exp/runtime/hooks/api/v1alpha1 **
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,GeneratePatchesRequest,Items
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,GeneratePatchesRequest,Variables
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,GeneratePatchesRequestItem,Variables
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,GeneratePatchesResponse,Items
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,GeneratePatchesResponseItem,Patch
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,ValidateTopologyRequest,Items
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,ValidateTopologyRequest,Variables
API rule violation: list_type_missing,sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1,ValidateTopologyRequestItem,Variables
$ git status
On branch 5648-allow-matching-all-mdc
nothing to commit, working tree clean
Is your project cloned outside the GOPATH?
No. It is within the GOPATH.
$ git log --oneline | head -n 5
0142e0b18 Update internal/controllers/topology/cluster/patches/inline/json_patch_generator_test.go
e0bdc2455 Fix crd API docs
25b6cd93b ykakarap's review: add unit test to match .PatchSelectorMachineDeploymentClass.Names to an empty object, and a note about it
206e507c2 requested changes
989c45e10 Update internal/controllers/topology/cluster/patches/inline/json_patch_generator.go
$ go env | grep 'GOOS\|GOARCH\|GOVERSION\|GOPATH'
GOARCH="amd64"
GOOS="linux"
GOPATH="/home/pvala/go"
GOVERSION="go1.17.12"
$ pwd
/home/pvala/go/src/github.com/kubernetes-sigs/cluster-api
I'm really not sure what the base issue is here at this point 🤔, and I don't know what the next steps to look for a solution are. Just as a sanity check, can you rebase on main and try to run the make generate targets again? Once that's done, I think we should focus on the substance of the PR. We should be able to fix up the generated OpenAPI stuff on your branch before it's time to merge.
Same result with the rebase.
Here's the output from `git status` after I ran `make generate`
git status
On branch 5648-allow-matching-all-mdc
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
deleted: api/v1alpha3/zz_generated.conversion.go
deleted: api/v1alpha4/zz_generated.conversion.go
deleted: bootstrap/kubeadm/api/v1alpha3/zz_generated.conversion.go
deleted: bootstrap/kubeadm/api/v1alpha4/zz_generated.conversion.go
deleted: bootstrap/kubeadm/types/upstreamv1beta1/zz_generated.conversion.go
deleted: bootstrap/kubeadm/types/upstreamv1beta2/zz_generated.conversion.go
deleted: bootstrap/kubeadm/types/upstreamv1beta3/zz_generated.conversion.go
deleted: controlplane/kubeadm/api/v1alpha3/zz_generated.conversion.go
deleted: controlplane/kubeadm/api/v1alpha4/zz_generated.conversion.go
deleted: exp/addons/api/v1alpha3/zz_generated.conversion.go
deleted: exp/addons/api/v1alpha4/zz_generated.conversion.go
deleted: exp/api/v1alpha3/zz_generated.conversion.go
deleted: exp/api/v1alpha4/zz_generated.conversion.go
deleted: internal/runtime/test/v1alpha1/zz_generated.conversion.go
Untracked files:
(use "git add <file>..." to include in what will be committed)
github.com/
test/infrastructure/docker/github.com/
I am not sure about the 2 new folders added.
Can you show what the rest of the path is under those newly created github directories?
Could you try to checkout/move the repository to /home/pvala/go/src/sigs.k8s.io/cluster-api instead of /home/pvala/go/src/github.com/kubernetes-sigs/cluster-api?
Maybe this fixes the generate 🤔
Surprisingly, that did it.
Edit: Or, well looking at the Makefile now, not so surprisingly.
There is code in the Makefile that should make this work even if the repo is cloned outside the GOPATH. I wonder why it did not work.
Ref: https://github.com/kubernetes-sigs/cluster-api/blob/2fc48a831e444981cbdc52c0d41b19571235e90b/Makefile#L65-L71
There is code in Makefile that should make this work even if the repo is cloned outside the GOPATH. I wonder why it did not work.
Yeah - I tried this locally using a similar path and it worked fine 🤔
This is shaping up nicely! There's still a change needed in the webhook to enable creating ClusterClasses with the empty selector (and a test to show it's working)
The relevant code is here: https://github.com/kubernetes-sigs/cluster-api/blob/998bc05c71d588109a426ec4019bf73612659c0a/internal/webhooks/patch_validation.go#L158-L161
My preference would be for the webhook to reject:
  matchResources:
    machineDeploymentClass:
      names: []
and accept:
  matchResources:
    machineDeploymentClass: {}
I think that cuts down on the complexity of the API. The empty names list is another added layer of confusion IMO.
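A sketch of that validation preference, under the assumption that decoding leaves `Names` nil when the field is absent, so a non-nil empty slice indicates an explicit `names: []`. This is illustrative only: the real check belongs in internal/webhooks/patch_validation.go, and this standalone type is a simplified stand-in for the v1beta1 API type.

```go
// Illustrative sketch of the proposed webhook rule; the real validation
// lives in internal/webhooks/patch_validation.go, and this type is a
// simplified stand-in for the v1beta1 API type.
package main

import (
	"errors"
	"fmt"
)

type PatchSelectorMatchMachineDeploymentClass struct {
	Names []string
}

// validateMDClassSelector accepts an unset selector (match none) and an
// empty selector {} (Names nil, meaning match all), but rejects an
// explicitly empty list (names: []), which would be set yet match nothing.
func validateMDClassSelector(s *PatchSelectorMatchMachineDeploymentClass) error {
	if s == nil {
		return nil // selector unset: valid, matches none
	}
	if s.Names != nil && len(s.Names) == 0 {
		return errors.New("machineDeploymentClass.names must not be an empty list; omit it to match all classes")
	}
	return nil
}

func main() {
	fmt.Println(validateMDClassSelector(&PatchSelectorMatchMachineDeploymentClass{}))                  // <nil>: {} accepted
	fmt.Println(validateMDClassSelector(&PatchSelectorMatchMachineDeploymentClass{Names: []string{}})) // rejected with an error
}
```

The `s.Names != nil` guard is the whole trick: it distinguishes "field omitted" from "field set to an empty list", which plain `len(s.Names) == 0` would conflate.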
@killianmuldoon @ykakarap Just to be clear, you are both suggesting the addition of one more condition check, right? The existing checks and tests should stay as is?
And I'm sorry for the delayed responses :raised_hands:
I think we should add some tests to the webhook to cover the following cases:
- Matching all MDs works to select multiple MachineDeployments when the selector is empty i.e. {}
- Setting the selector to {Names:[]} is rejected - returns an error
Right. I postponed writing the tests until one of you approved the code changes.
@valaparthvi @killianmuldoon I've updated the PR description to reflect what we want to achieve. Am I correct?
It would be really great if someone could test this with a real Cluster/ClusterClass once we have the final version of the validation, just to ensure our assumptions about YAML/JSON unmarshalling are correct and all the cases behave as expected.
/lgtm
This is working as intended - thanks @valaparthvi for following this one through!
Could you squash your commits?
Thank you all for quick reviews and patience :)
Could you squash your commits?
Done
Getting on this after PTO. @vincepri could you take a look at the API (how do we express that the patch applies to all MachineDeploymentClasses)?