cluster-api
:sparkles: Dynamic CRS templating
What this PR does / why we need it:
This adds templating to CRS objects as they are being inserted into the remote cluster.
Standard Go templating is used, and the cluster is made available as the template context, along with a couple of helper functions for getting annotations and labels from the cluster.
This can be useful if you want to create objects in the remote cluster that are dynamic, based on the cluster that has been created.
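To make the idea concrete, here is a minimal sketch of what templating a CRS manifest against the cluster context could look like. The `Cluster` struct, the `renderCRS` function, and the exact helper names are illustrative assumptions, not the PR's actual types or API.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Cluster is a stand-in for the Cluster API object; the fields here are
// illustrative, not the real API types.
type Cluster struct {
	Name        string
	Annotations map[string]string
	Labels      map[string]string
}

// renderCRS applies standard Go templating to a resource manifest before it
// is applied to the remote cluster. The "annotation" and "label" helpers are
// hypothetical equivalents of the helper functions described above.
func renderCRS(manifest string, cl Cluster) (string, error) {
	funcs := template.FuncMap{
		"annotation": func(key string) string { return cl.Annotations[key] },
		"label":      func(key string) string { return cl.Labels[key] },
	}
	tmpl, err := template.New("crs").Funcs(funcs).Parse(manifest)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, cl); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	cl := Cluster{
		Name:        "dev-cluster",
		Annotations: map[string]string{"example.com/testing": "enabled"},
	}
	out, err := renderCRS("name: {{ .Name }}\nmode: {{ annotation \"example.com/testing\" }}\n", cl)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```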
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
The committers are authorized under a signed CLA.
- :white_check_mark: Kevin McDermott (f310fe9c3f5cd8e27a730f9af62e324096a2db34, 94205c7e722775b86b513c6905f22b66d8dd69c2)
Hi @bigkevmcd. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To complete the pull request process, please assign enxebre after the PR has been reviewed.
You can assign the PR to them by writing /assign @enxebre in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
The values which can be templated and their paths become part of the API. In this PR we're passing in the whole cluster object. Some thoughts:
Do we need to pin the version of the cluster object to the version of the ClusterResourceSet? If the CRS is v1beta1 but the cluster is v1, do we downgrade the cluster to v1beta1 before templating?
Do we gain anything here?
Are all parts of the cluster object safe for templating? For example, if a user can manipulate a cloud to produce a particular error message in Status.FailureMessage, can they use that to alter the structure of the rendered object in a manner which isn't safe?
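To illustrate the concern: Go's text/template substitutes values verbatim, so a status string containing newlines can change the structure of a rendered YAML document. This sketch is hypothetical and not taken from the PR; it just shows why a raw status field is a risky template input.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes a manifest template against arbitrary data. Values are
// substituted verbatim, with no YAML-aware escaping.
func render(manifest string, data any) string {
	tmpl := template.Must(template.New("m").Parse(manifest))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// A failure message a user might be able to provoke from a cloud provider.
	status := map[string]string{
		"FailureMessage": "quota exceeded\n  privileged: true",
	}
	// The template author expected a single scalar value here, but the
	// embedded newline introduces an extra "privileged" key into the output.
	fmt.Print(render("spec:\n  reason: {{ .FailureMessage }}\n", status))
}
```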
I'd propose to simplify it to just the ObjectMeta and Spec fields.
The data parameter could be altered to...
map[string]interface{}{
	"cluster": map[string]interface{}{
		"meta": cl.ObjectMeta,
		"spec": cl.Spec,
	},
}
Honestly tho', my primary use-case is extracting data that was placed as annotations/labels on the Cluster specifically to configure the CRS that is placed into the remote cluster, so I could do without the Spec.
The above data would result in templating that was more explicit, {{ cluster.meta.Name }} for example to grab the cluster name, but could also use the label and annotation functions that are provided e.g. {{ annotation "example.com/testing" }}.
If we went this route it's probably worth reducing the surface of issues like these by creating a new struct which contains values intended for templating.
Does the simplified data above work for you?
While I could "simplify" the ObjectMeta and Spec further, it feels like the cost of maintaining the mapping from fields in the Cluster to fields in the provided data map would be significant (both technically and from a UX perspective).
- Changes to permitted templated values are isolated, easier to document, and easier to reason about across upgrades.
- Permitted templated values can be restricted to values which are known to be safe.
I think this addresses this issue?
The values which can be templated and their paths become part of the API. In this PR we're passing in the whole cluster object. Some thoughts:

Do we need to pin the version of the cluster object to the version of the ClusterResourceSet? If the CRS is v1beta1 but the cluster is v1, do we downgrade the cluster to v1beta1 before templating?
Do we gain anything here?
I was thinking more about if we lose something. For example, if we remove a field in v1 our template which is expecting fields from v1beta1 may explode. If we handle templated data separately we have an opportunity to manage things like this explicitly.
Are all parts of the cluster object safe for templating? For example, if a user can manipulate a cloud to produce a particular error message in Status.FailureMessage, can they use that to alter the structure of the rendered object in a manner which isn't safe?
I'd propose to simplify it to just the ObjectMeta and Spec fields. The data parameter could be altered to...

map[string]interface{}{
	"cluster": map[string]interface{}{
		"meta": cl.ObjectMeta,
		"spec": cl.Spec,
	},
}

Honestly tho', my primary use-case is extracting data that was placed as annotations/labels on the Cluster specifically to configure the CRS that is placed into the remote cluster, so I could do without the Spec. The above data would result in templating that was more explicit, {{ cluster.meta.Name }} for example to grab the cluster name, but could also use the label and annotation functions that are provided e.g. {{ annotation "example.com/testing" }}. If we went this route it's probably worth reducing the surface of issues like these by creating a new struct which contains values intended for templating.

Does the simplified data above work for you?

While I could "simplify" the ObjectMeta and Spec further, it feels like the cost of maintaining the mapping from fields in the Cluster to fields in the provided data map would be significant (both technically and from a UX perspective).
I would still worry about 'accidental API', and about some potential brain twisters during API upgrades. I was thinking something more like:
struct {
	ClusterName        string
	ClusterAnnotations map[string]string
	ClusterLabels      map[string]string
}
New values can be added explicitly as required, and we're not accidentally adding anything we haven't thought about carefully in advance.
/ok-to-test
@bigkevmcd: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/lifecycle frozen
/hold
let's keep this around until the corresponding proposal gets merged https://github.com/kubernetes-sigs/cluster-api/pull/6555
@fabriziopandini: The lifecycle/frozen label cannot be applied to Pull Requests.
In response to this:
/lifecycle frozen
/hold
let's keep this around until the corresponding proposal gets merged https://github.com/kubernetes-sigs/cluster-api/pull/6555
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten