cluster-api-provider-openstack

CAPO needs to delete or recreate resources that time out or fail to create

Open · tranthang2404 opened this issue 2 years ago · 6 comments

/kind bug

What steps did you take and what happened: While experimenting with CAPI and CAPO, I created several clusters, but the process wasn't smooth.

Some cases I ran into:

  • LB creation fails, and CAPO keeps waiting on the LB status instead of recreating it ==> the cluster cannot provision
  • VM creation times out and CAPO doesn't detect it; CAPO only reports an error in the OpenStackMachine status and neither recreates nor deletes the failed VM
  • A case like this issue: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1254

What did you expect to happen: CAPO needs more handling to reliably create a working cluster

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster API Provider OpenStack version (Or git rev-parse HEAD if manually built):
  • Cluster-API version:
  • OpenStack version:
  • Minikube/KIND version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):

tranthang2404 · Jun 09 '22 04:06

So is this an umbrella issue for all the issues you listed?

LB creation fails, and CAPO keeps waiting on the LB status instead of recreating it ==> the cluster cannot provision
VM creation times out and CAPO doesn't detect it; CAPO only reports an error in the OpenStackMachine status and neither recreates nor deletes the failed VM

I think those two are valid cases. We agreed to check for specific errors: when we know a resource will definitely fail, we won't keep reconciling and retrying it uselessly.

So what's the error you're talking about above? A timeout, which would be worth retrying?

jichenjc · Jun 13 '22 09:06

We have an issue for the CAPO VM Retries here: #1116. For further problems you might be able to leverage CAPI MachineHealthChecks for recreating machines if they do not become ready within some timeout.
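For anyone following along, a minimal MachineHealthCheck sketch along the lines suggested above might look like the following, assuming the cluster.x-k8s.io/v1beta1 API. The cluster name `my-cluster`, the `my-cluster-md-0` deployment label, and all timeout values are placeholders for illustration, not anything taken from this repo:

```yaml
# Hypothetical example: remediate worker Machines that never become Ready.
# Cluster name, deployment label, and timeouts are placeholders.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: my-cluster-worker-unhealthy
  namespace: default
spec:
  clusterName: my-cluster
  # Stop remediating if too many machines are unhealthy at once.
  maxUnhealthy: 40%
  # Covers the "VM created but node never joins" timeout case.
  nodeStartupTimeout: 10m
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: my-cluster-md-0
  unhealthyConditions:
    - type: Ready
      status: Unknown
      timeout: 300s
    - type: Ready
      status: "False"
      timeout: 300s
```

Note that a MachineHealthCheck only remediates Machines (the VMs); it does not help with the cluster-level load balancer, which is the remaining gap discussed below.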

apricote · Jun 15 '22 14:06

So is this an umbrella issue for all the issues you listed?

LB creation fails, and CAPO keeps waiting on the LB status instead of recreating it ==> the cluster cannot provision
VM creation times out and CAPO doesn't detect it; CAPO only reports an error in the OpenStackMachine status and neither recreates nor deletes the failed VM

I think those two are valid cases. We agreed to check for specific errors: when we know a resource will definitely fail, we won't keep reconciling and retrying it uselessly.

So what's the error you're talking about above? A timeout, which would be worth retrying?

Because I can't provision a cluster when the LB or VMs time out, I want a way to get a ready cluster from a single manifest file, without any manual intervention.

For now, I see that MachineHealthChecks can help recreate the VMs, but they cannot recreate the LB.

tranthang2404 · Jun 16 '22 03:06

We have an issue for the CAPO VM Retries here: #1116. For further problems you might be able to leverage CAPI MachineHealthChecks for recreating machines if they do not become ready within some timeout.

Thanks. How about recreating the LB? Does it have a similar solution?

tranthang2404 · Jun 16 '22 03:06

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Sep 14 '22 04:09

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Oct 14 '22 04:10

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Nov 13 '22 05:11

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Nov 13 '22 05:11