
🌱 Add BootstrapFailedMachineError error

Open mcbenjemaa opened this issue 1 year ago • 10 comments

What this PR does / why we need it:

In some infra providers, we need more configuration to provision a cluster. For example, cloud-init config and control plane endpoint.

Since there are no validations for those configs, bootstrap (cloud-init) may fail due to misconfiguration, and we need a way to figure out why.

Having a new machine error reason will give users a clear idea of what's happening.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #

/area machine

mcbenjemaa avatar Apr 02 '24 12:04 mcbenjemaa
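As a rough illustration of what the proposed addition might look like (a sketch only: the constant name, value, and doc comment below are assumed, not quoted from the PR diff; the `MachineStatusError` type already exists in the `errors` package):

```go
// Sketch of the kind of constant this PR proposes for the
// sigs.k8s.io/cluster-api/errors package. The exact name, value, and doc
// text here are illustrative, not the literal diff from the PR.
package errors

// MachineStatusError mirrors the string type already defined in the
// errors package for terminal machine failure reasons.
type MachineStatusError string

const (
	// BootstrapFailedMachineError would indicate that bootstrapping the
	// machine (for example, running cloud-init) failed, typically because
	// of misconfigured bootstrap data.
	BootstrapFailedMachineError MachineStatusError = "BootstrapFailed"
)
```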

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

Once this PR has been reviewed and has the lgtm label, please assign enxebre for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.

k8s-ci-robot avatar Apr 02 '24 12:04 k8s-ci-robot

/area machine

mcbenjemaa avatar Apr 02 '24 12:04 mcbenjemaa

/ok-to-test

mcbenjemaa avatar Apr 02 '24 12:04 mcbenjemaa

@mcbenjemaa this would be very useful. These kinds of errors, though, might happen after the IaaS considers an instance started, in which case they are not surfaced/exposed in any immediately consumable way. Can you articulate an example of how such a failure would be detected and bubbled up here?

To make sure we approach this holistically and come up with the most valuable approach, we might want to start by writing down some failure scenarios and how they would be surfaced. FWIW, see previous related efforts: https://github.com/kubernetes-sigs/cluster-api/issues/2554

enxebre avatar May 31 '24 13:05 enxebre

I'm using the CAPI provider for Proxmox, and I need a way to detect whether the bootstrap fails.

With Proxmox, I'm calling the API to check the status of cloud-init.

Based on that, the provider will mark a machine as Bootstrap Failed, so the user knows that it's failing due to the bootstrap data.

mcbenjemaa avatar May 31 '24 13:05 mcbenjemaa
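For context, a minimal sketch of how a provider reconciler might translate a failed cloud-init status into such a terminal error. The `CloudInitStatus`, `cloudInitChecker`, and `InfraMachineStatus` types are hypothetical stand-ins for the Proxmox client and infra machine status, and `BootstrapFailedMachineError` is the constant proposed by this PR, not one that exists upstream:

```go
// Hypothetical reconcile helper; all types here are illustrative stand-ins
// for provider-specific types, not real CAPMOX or CAPI APIs.
package controllers

import "context"

// CloudInitStatus is an assumed shape for what the Proxmox API reports.
type CloudInitStatus struct {
	Failed  bool
	Message string
}

// cloudInitChecker abstracts the hypothetical Proxmox client call.
type cloudInitChecker interface {
	GetCloudInitStatus(ctx context.Context, vmID int64) (CloudInitStatus, error)
}

// InfraMachineStatus mimics the usual FailureReason/FailureMessage fields.
type InfraMachineStatus struct {
	FailureReason  *string
	FailureMessage *string
}

// BootstrapFailedMachineError is the constant this PR proposes (illustrative).
const BootstrapFailedMachineError = "BootstrapFailed"

func reconcileBootstrap(ctx context.Context, c cloudInitChecker, vmID int64, st *InfraMachineStatus) error {
	status, err := c.GetCloudInitStatus(ctx, vmID)
	if err != nil {
		return err // transient API error; retry on the next reconcile
	}
	if status.Failed {
		// Record a terminal failure so the reason is visible to the user
		// on the infra machine (and, from there, on the Machine).
		reason := BootstrapFailedMachineError
		msg := "cloud-init reported a failure: " + status.Message
		st.FailureReason = &reason
		st.FailureMessage = &msg
	}
	return nil
}
```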

I'd expect such failures to be captured in a condition that we then signal as a permanent failure. See previous efforts: https://github.com/kubernetes-sigs/cluster-api/pull/6218

In the absence of a mechanism for the above, should this failure be captured in the ProxmoxMachine Ready condition, which would then be bubbled up to the Machine?

enxebre avatar Jun 03 '24 15:06 enxebre
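A minimal sketch of the condition-based alternative being suggested, using the v1beta1 conditions utilities from core Cluster API. The "BootstrapFailed" reason string is illustrative, and `obj` can be any infra machine type (such as a ProxmoxMachine) that implements `conditions.Setter`:

```go
// Sketch only: marks the infra machine's Ready condition False with Error
// severity, so the failure bubbles up to the owning Machine through the
// usual condition aggregation. The reason string is an assumption, not an
// upstream constant.
package controllers

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

func markBootstrapFailed(obj conditions.Setter, cloudInitMessage string) {
	conditions.MarkFalse(obj,
		clusterv1.ReadyCondition,
		"BootstrapFailed", // illustrative reason
		clusterv1.ConditionSeverityError,
		"cloud-init failed: %s", cloudInitMessage,
	)
}
```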

In my case, if a machine fails to provision, it's considered a provisioning failure.

I don't know whether it's good to have it in conditions.

But in the case of control planes, the cluster should be marked as failed, I guess.

mcbenjemaa avatar Jun 03 '24 16:06 mcbenjemaa

Can we get this merged so we can rely on this condition?

mcbenjemaa avatar Jun 18 '24 11:06 mcbenjemaa

The /errors package has its origin in the time when CAPI providers were machine actuators that needed to vendor core CAPI to function. There are no usage recommendations, and its value is questionable since we moved to CRDs and conditions for interoperability between core and providers. I think we should deprecate it, and if there's any use case relying on it, we should support it via conditions. I captured this here: https://github.com/kubernetes-sigs/cluster-api/issues/10784

Would using a condition/reason be sufficient for your use case (https://github.com/kubernetes-sigs/cluster-api/pull/10360#issuecomment-2145548209)?

enxebre avatar Jun 19 '24 13:06 enxebre
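On the consuming side, a hedged sketch of how a user-facing controller could detect the same failure from the Machine's conditions rather than from the terminal-error fields, again assuming the illustrative "BootstrapFailed" reason used above:

```go
// Sketch only: checks the Machine's Ready condition for the assumed
// "BootstrapFailed" reason instead of reading Status.FailureReason.
package controllers

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

func bootstrapFailed(m *clusterv1.Machine) bool {
	return conditions.IsFalse(m, clusterv1.ReadyCondition) &&
		conditions.GetReason(m, clusterv1.ReadyCondition) == "BootstrapFailed"
}
```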

/hold given the above

vincepri avatar Jun 24 '24 04:06 vincepri

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 22 '24 05:09 k8s-triage-robot

As we deprecated the entire package in https://github.com/kubernetes-sigs/cluster-api/issues/10784, we should not add new constants to it now

/close

sbueringer avatar Sep 23 '24 09:09 sbueringer

@sbueringer: Closed this PR.

In response to this:

As we deprecated the entire package in https://github.com/kubernetes-sigs/cluster-api/issues/10784, we should not add new constants to it now

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Sep 23 '24 09:09 k8s-ci-robot