cluster-api
Fields that are taking URLs or OCI registry paths should validate for correctness
What steps did you take and what happened:
Use a KubeadmControlPlane with the following:

```yaml
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      dns:
        imageRepository: |
          registry.contoso.com/kubernetes
```

...
The image repository then has a newline character in it. This is currently treated as valid but should be rejected; as a result, the user may not know they've done something wrong until the KCP machine fails to boot.
What did you expect to happen:
Anywhere we are reading an OCI registry path or an HTTP URL, we should verify that the value is valid as a URI. There are parsing functions in the Go standard library under net/url, as well as functions in docker/distribution/reference.
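To illustrate the URL half of this (a minimal stdlib-only sketch, not Cluster API's actual implementation; `validateHTTPURL` is a hypothetical helper), note that net/url alone accepts many values a webhook should reject, such as an empty scheme, so a few extra checks are needed on top of it:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
	"unicode"
)

// validateHTTPURL is a hypothetical helper sketching the kind of check the
// issue asks for: reject values that url.Parse would accept (or only reject
// for the wrong reason), such as surrounding whitespace, control characters,
// a missing scheme, or a missing host.
func validateHTTPURL(raw string) error {
	if strings.TrimSpace(raw) != raw {
		return fmt.Errorf("URL %q has leading or trailing whitespace", raw)
	}
	for _, r := range raw {
		if unicode.IsControl(r) {
			return fmt.Errorf("URL %q contains a control character", raw)
		}
	}
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	if u.Scheme != "http" && u.Scheme != "https" {
		return fmt.Errorf("URL %q must use http or https", raw)
	}
	if u.Host == "" {
		return fmt.Errorf("URL %q has no host", raw)
	}
	return nil
}

func main() {
	// A well-formed URL passes; the trailing-newline case from the issue fails.
	fmt.Println(validateHTTPURL("https://registry.contoso.com/kubernetes"))
	fmt.Println(validateHTTPURL("https://registry.contoso.com/kubernetes\n") != nil)
}
```

A real webhook would return these as field errors rather than plain errors, but the shape of the checks would be similar.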
Anything else you would like to add:
Environment:
- Cluster-api version: v1.2.2
- minikube/kind version: N/A
- Kubernetes version (use kubectl version): N/A
- OS (e.g. from /etc/os-release): N/A
/kind bug
/triage accepted
If you end up doing something custom because library functions don't do the right thing, it'd be good to have a utility function in one of the externally exposed packages as I can imagine it being useful to runtime extension authors.
Q: does this qualify for a backport? I'm generally +1 because it is a bug and the fix can avoid problems for users, but at the same time it could potentially break some users (I have to check the impacted fields/use cases to confirm first)
If all the cases that the validation additionally blocks are cases that wouldn't work anyway (like the newline), it wouldn't block any working setups for users.
That's ok, we can bring this to the office hours if there is demand for this backport, but I agree this should not block users (it is worth a note in the release notes though)
Can I work on this issue? Sorry, newbie question: does this need to be fixed in both kubeadm and Cluster API, or does Cluster API reuse kubeadm's validation once it is implemented there?
The idea is to implement this in Cluster API validation webhooks, in addition to anything else that might be implemented in kubeadm (so that users get the validation errors already from the webhook).
It's pretty much TBD at this point which validation we want to implement for which fields (except imageRepository, which is already pretty clear). So the first step is an audit of all our v1beta1 API types so we can decide which fields should get validation.
@sbueringer Thanks for clarifying and adding more context. I'll look for more beginner friendly issues then :) . Thanks.
Stumbled upon https://github.com/kubernetes-sigs/cluster-api/blob/62e6600244ee7503eefba4db56ef7e94773b8314/util/container/image.go#L45 (code where we are using the docker library to parse image names); it could be a good starting point when looking for a validation func for the repository only.
Yup that's why Naadir mentioned docker/distribution/reference (as we're using utils from there and there are more).
Additional note: our issue was a trailing \n after an otherwise perfectly valid image repo name. Would be good to make sure that cases like this are caught by the validation.
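For the repository side, the real check would go through docker/distribution/reference (e.g. ParseNormalizedNamed, as the linked util/container/image.go code already does for full image names). As a self-contained sketch of why the trailing-\n case gets caught, here is a deliberately simplified, stdlib-only version of the repository grammar; `validateImageRepository` and its regexp are hypothetical stand-ins, not the library's actual rules:

```go
package main

import (
	"fmt"
	"regexp"
)

// repoRegexp is a simplified approximation of an image repository:
// dot/dash/underscore-separated lowercase components, an optional port,
// and slash-separated path components. The authoritative grammar lives in
// docker/distribution/reference. Note that Go's RE2 anchors `$` at the end
// of the text, so a trailing newline fails the match.
var repoRegexp = regexp.MustCompile(
	`^[a-z0-9]+(?:[._-][a-z0-9]+)*(?::[0-9]+)?(?:/[a-z0-9]+(?:[._-][a-z0-9]+)*)*$`)

// validateImageRepository rejects anything outside the simplified grammar,
// including the trailing newline from the issue report.
func validateImageRepository(repo string) error {
	if !repoRegexp.MatchString(repo) {
		return fmt.Errorf("%q is not a valid image repository", repo)
	}
	return nil
}

func main() {
	// The valid repo passes; the same repo with a trailing newline fails.
	fmt.Println(validateImageRepository("registry.contoso.com/kubernetes"))
	fmt.Println(validateImageRepository("registry.contoso.com/kubernetes\n") != nil)
}
```

In the actual fix, delegating to the library instead of a hand-rolled regexp avoids drifting from the real reference grammar (tags, digests, uppercase domains, etc.).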
/help
@fabriziopandini: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted
/priority backlog