cluster-api-provider-openstack
CAPO should delete or recreate resources that time out or fail to create
/kind bug
What steps did you take and what happened: While gaining experience with CAPI and CAPO, I created some clusters, but the process was not smooth.
Some cases I ran into:
- The LB fails to create; CAPO keeps waiting on the LB status and does not recreate it ==> the cluster cannot provision.
- The VM creation times out; CAPO does not detect this, it only reports an error in the OpenStackMachine status and neither recreates nor deletes the failed VM.
- A case like this issue: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1254
What did you expect to happen: CAPO should do more to recover from these failures so the cluster can be provisioned successfully.
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
Environment:
- Cluster API Provider OpenStack version (or `git rev-parse HEAD` if manually built):
- Cluster-API version:
- OpenStack version:
- Minikube/KIND version:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
So is this an umbrella issue for all the issues you listed?

> The LB fails to create; CAPO keeps waiting on the LB status and does not recreate it ==> the cluster cannot provision.
> The VM creation times out; CAPO does not detect this, it only reports an error in the OpenStackMachine status and neither recreates nor deletes the failed VM.

I think those two are valid cases. We agreed to check for specific errors, e.g. when we know a resource will definitely fail, we won't keep reconciling it and doing useless retries on such resources.
So what is the error you are talking about above? A timeout? Is it worth retrying?
We have an issue for the CAPO VM Retries here: #1116. For further problems you might be able to leverage CAPI MachineHealthChecks for recreating machines if they do not become ready within some timeout.
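For illustration, a minimal MachineHealthCheck sketch along those lines might look like the following; the cluster name, namespace, label selector, and timeouts are placeholders and would need to match your own Cluster and MachineDeployment:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: my-cluster-worker-unhealthy      # hypothetical name
  namespace: default
spec:
  clusterName: my-cluster                # placeholder: your Cluster name
  # Only Machines matching this selector are checked
  # (here: machines owned by a hypothetical worker MachineDeployment).
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: my-cluster-md-0
  # Allow remediation even when many machines are unhealthy at once.
  maxUnhealthy: 100%
  # Remediate machines whose Node never appears within this window,
  # which would cover VMs that time out during creation.
  nodeStartupTimeout: 20m
  unhealthyConditions:
    - type: Ready
      status: Unknown
      timeout: 5m
    - type: Ready
      status: "False"
      timeout: 5m
```

Note that a MachineHealthCheck only remediates Machines, so it would cover the VM timeout case but not the OpenStackCluster's load balancer.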
> So is this an umbrella issue for all the issues you listed?
> I think those two are valid cases. We agreed to check for specific errors, e.g. when we know a resource will definitely fail, we won't keep reconciling it and doing useless retries on such resources.
> So what is the error you are talking about above? A timeout? Is it worth retrying?
Because I cannot provision a cluster when the LB or some VMs time out, I want a way to get a ready cluster from a single manifest file without manual intervention.
For now, I see that MachineHealthChecks can help recreate the VMs, but they cannot recreate the LB.
> We have an issue for the CAPO VM Retries here: #1116. For further problems you might be able to leverage CAPI MachineHealthChecks for recreating machines if they do not become ready within some timeout.

Thanks. How about recreating the LB? Is there a similar solution for it?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, lifecycle/stale is applied
> - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
> - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
> You can:
> - Reopen this issue with /reopen
> - Mark this issue as fresh with /remove-lifecycle rotten
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.