🌱 Add BootstrapFailedMachineError error
What this PR does / why we need it:
In some infra providers, we need more configuration to provision a cluster. For example, cloud-init config and control plane endpoint.
Since there are no validations for those configs, the Bootstrap (cloud-init) may fail due to misconfiguration, and we need to figure out why.
Having a new machine error reason will give users a clear idea of what's happening.
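For illustration, a minimal sketch of the kind of constant this PR would add to the errors package (the exact name and string value shown here are assumptions, not necessarily the final API):

```go
package errors

// MachineStatusError is the string type Cluster API already uses for
// terminal machine failure reasons (shown here only for context).
type MachineStatusError string

const (
	// BootstrapFailedMachineError would indicate that the bootstrap process
	// (e.g. cloud-init) failed on the machine, typically because of an
	// invalid or incomplete bootstrap configuration.
	BootstrapFailedMachineError MachineStatusError = "BootstrapFailed"
)
```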
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
/area machine
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign enxebre for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/area machine
/ok-to-test
@mcbenjemaa this would be very useful. These kinds of errors, though, might happen after the IaaS considers an instance started, in which case they are not surfaced/exposed in any immediately consumable way. Can you articulate an example of how such a failure would be detected and bubbled up here?
To make sure we approach this holistically and come up with the most valuable approach, we might want to start by writing down some failure scenarios and how they would be surfaced. FWIW see previous related efforts https://github.com/kubernetes-sigs/cluster-api/issues/2554
I'm using the CAPI provider for Proxmox, and I need a way to detect whether the bootstrap fails.
With Proxmox, I'm calling the API to check the status of cloud-init.
Based on that, I will mark the machine as Bootstrap Failed, so the user knows it's failing because of the bootstrap data.
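For context, a rough, self-contained sketch of how a provider could record such a failure on its machine status; the ProxmoxMachineStatus stand-in and the markBootstrapFailed helper are illustrative only, not actual provider code:

```go
package main

// MachineStatusError mirrors the string type used by the Cluster API errors
// package; BootstrapFailedMachineError is the constant proposed by this PR
// (name and value are illustrative).
type MachineStatusError string

const BootstrapFailedMachineError MachineStatusError = "BootstrapFailed"

// ProxmoxMachineStatus is a trimmed-down stand-in for the real ProxmoxMachine
// status, keeping only the terminal-failure fields relevant here.
type ProxmoxMachineStatus struct {
	FailureReason  *MachineStatusError
	FailureMessage *string
}

// markBootstrapFailed records a terminal bootstrap failure, e.g. after the
// Proxmox API reports that cloud-init exited with an error.
func markBootstrapFailed(status *ProxmoxMachineStatus, msg string) {
	reason := BootstrapFailedMachineError
	status.FailureReason = &reason
	status.FailureMessage = &msg
}

func main() {
	status := &ProxmoxMachineStatus{}
	markBootstrapFailed(status, "cloud-init reported status 'error'; check the bootstrap data")
}
```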
I'd expect such kind of failures to be captured in a condition, that then we signal as permanent failure. See previous efforts https://github.com/kubernetes-sigs/cluster-api/pull/6218
In the absence of a mechanism for the above, should this failure be captured in the ProxmoxMachine Ready condition? That would then be bubbled up to the Machine.
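To make the suggestion concrete, a minimal sketch of the condition-based alternative using the existing util/conditions helpers; fakeInfraMachine and the "BootstrapFailed" reason string are assumptions for illustration, not an agreed API:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// fakeInfraMachine is a minimal stand-in for an infrastructure machine CRD
// (e.g. ProxmoxMachine) that satisfies the conditions.Setter interface.
type fakeInfraMachine struct {
	metav1.TypeMeta
	metav1.ObjectMeta
	Conditions clusterv1.Conditions
}

func (m *fakeInfraMachine) GetConditions() clusterv1.Conditions  { return m.Conditions }
func (m *fakeInfraMachine) SetConditions(c clusterv1.Conditions) { m.Conditions = c }

// DeepCopyObject returns a shallow copy; good enough for this sketch.
func (m *fakeInfraMachine) DeepCopyObject() runtime.Object { c := *m; return &c }

func main() {
	m := &fakeInfraMachine{}
	// Mark the infra machine's Ready condition false with a dedicated reason
	// and message describing the cloud-init failure.
	conditions.MarkFalse(m, clusterv1.ReadyCondition, "BootstrapFailed",
		clusterv1.ConditionSeverityError, "cloud-init failed: %s", "invalid bootstrap data")
	fmt.Printf("%+v\n", m.Conditions)
}
```

The Machine controller already mirrors the infrastructure object's Ready condition into the Machine's InfrastructureReady condition, so the reason and message would surface on the Machine without touching the errors package.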
In my case, if a machine fails to provision, it's considered a provisioning failure.
I don't know whether it's good to have that in conditions.
But in the case of control planes, the cluster should be marked as failed, I guess.
Can we get this merged so we can rely on this condition?
The /errors package has its origin in the days when CAPI providers were machineActuators that needed to vendor core CAPI to function. There are no usage recommendations, and its value is questionable since we moved to CRDs and conditions for interoperability between core and providers. I think we should deprecate it, and if there's any use case relying on it we should support it via conditions. I captured this here: https://github.com/kubernetes-sigs/cluster-api/issues/10784
Would using a condition/reason be sufficient for your use case https://github.com/kubernetes-sigs/cluster-api/pull/10360#issuecomment-2145548209?
/hold given the above
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
As we deprecated the entire package in https://github.com/kubernetes-sigs/cluster-api/issues/10784, we should not add new constants to it now
/close
@sbueringer: Closed this PR.
In response to this:
As we deprecated the entire package in https://github.com/kubernetes-sigs/cluster-api/issues/10784, we should not add new constants to it now
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.