cluster-api-provider-aws

Some MachinePools keep the status Deleting forever

Open Julian-Chu opened this issue 2 years ago • 4 comments

/kind bug

What steps did you take and what happened: We use GitOps with Argo CD to build a PoC that creates and cleans up clusters per PR for testing. We found that some clusters and machine pools stay in the Deleting phase forever, even though the AWS resources have already been removed.

// cluster
NAME                   PHASE         AGE    VERSION
review-cluster-pr-14   Deleting      4d5h
review-cluster-pr-16   Deleting      4d
// machinepool
NAME                          CLUSTER                REPLICAS   PHASE      AGE
review-cluster-pr-14-pool-0   review-cluster-pr-14   3          Deleting   4d5h
review-cluster-pr-16-pool-0   review-cluster-pr-16   1          Deleting   4d
review-cluster-pr-5-pool-0    review-cluster-pr-5    3          Deleting   7d5h
review-cluster-pr-9-pool-0    review-cluster-pr-9    3          Deleting   7d5h

I only see an error message on review-cluster-pr-5-pool-0:

 Failure Message:         MachinePool infrastructure resource infrastructure.cluster.x-k8s.io/v1beta2, Kind=AWSManagedMachinePool with name "review-cluster-pr-5-pool-0" has been deleted after being ready
  Failure Reason:          InvalidConfiguration
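
To double-check that, the infrastructure object behind the failure can be listed directly on the management cluster (rough sketch; the group/kind come from the failure message above and the namespace from the controller logs below):

kubectl get awsmanagedmachinepools.infrastructure.cluster.x-k8s.io -n dh-cap-dev
kubectl get machinepools.cluster.x-k8s.io -n dh-cap-dev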

What did you expect to happen: The Kubernetes resources should be removed correctly.

Anything else you would like to add: Logs provided by @arjunrn. For PR 5 and 9, it seems the cluster was deleted before the machine pool:

capi-controller-manager-59f96c6567-lwbnx manager E0616 14:40:04.615784       1 machinepool_controller.go:129] "Failed to get Cluster for MachinePool." err="failed to get Cluster/review-cluster-pr-9: Cluster.cluster.x-k8s.io \"review-cluster-pr-9\" not found" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" namespace="dh-cap-dev" name="review-cluster-pr-9-pool-0" reconcileID=97eb71ac-ab04-4ff0-a29b-48e3ee075aa5 MachinePool="dh-cap-dev/review-cluster-pr-9-pool-0" Cluster="dh-cap-dev/review-cluster-pr-9"
capi-controller-manager-59f96c6567-lwbnx manager E0616 14:40:04.615984       1 controller.go:329] "Reconciler error" err="failed to get cluster \"review-cluster-pr-9\" for machinepool \"review-cluster-pr-9-pool-0\" in namespace \"dh-cap-dev\": failed to get Cluster/review-cluster-pr-9: Cluster.cluster.x-k8s.io \"review-cluster-pr-9\" not found" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" MachinePool="dh-cap-dev/review-cluster-pr-9-pool-0" namespace="dh-cap-dev" name="review-cluster-pr-9-pool-0" reconcileID=97eb71ac-ab04-4ff0-a29b-48e3ee075aa5
capi-controller-manager-59f96c6567-lwbnx manager E0616 14:40:04.616074       1 controller.go:329] "Reconciler error" err="failed to get cluster \"review-cluster-pr-5\" for machinepool \"review-cluster-pr-5-pool-0\" in namespace \"dh-cap-dev\": failed to get Cluster/review-cluster-pr-5: Cluster.cluster.x-k8s.io \"review-cluster-pr-5\" not found" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" MachinePool="dh-cap-dev/review-cluster-pr-5-pool-0" namespace="dh-cap-dev" name="review-cluster-pr-5-pool-0" reconcileID=86a3203c-f7f7-4595-95d0-8c56730ed26b
capi-controller-manager-59f96c6567-lwbnx manager E0616 14:40:04.735442       1 controller.go:329] "Reconciler error" err="failed to retrieve kubeconfig secret for Cluster dh-cap-dev/review-cluster-pr-16: secrets \"review-cluster-pr-16-kubeconfig\" not found" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" MachinePool="dh-cap-dev/review-cluster-pr-16-pool-0" namespace="dh-cap-dev" name="review-cluster-pr-16-pool-0" reconcileID=7cbbef32-2fec-45bf-84ed-985559c1bdb5
capi-controller-manager-59f96c6567-lwbnx manager E0616 14:40:04.742310       1 controller.go:329] "Reconciler error" err="failed to retrieve kubeconfig secret for Cluster dh-cap-dev/review-cluster-pr-14: secrets \"review-cluster-pr-14-kubeconfig\" not found" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" MachinePool="dh-cap-dev/review-cluster-pr-14-pool-0" namespace="dh-cap-dev" name="review-cluster-pr-14-pool-0" reconcileID=39811052-25b5-4bf5-929b-4fd8058850af

PR 14 and 16:

E0616 14:41:21.572118       1 controller.go:329] "Reconciler error" err="failed to retrieve kubeconfig secret for Cluster dh-cap-dev/review-cluster-pr-16: secrets \"review-cluster-pr-16-kubeconfig\" not found" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" MachinePool="dh-cap-dev/review-cluster-pr-16-pool-0" namespace="dh-cap-dev" name="review-cluster-pr-16-pool-0" reconcileID=ea8f5bdb-9575-4bcf-9e7e-4a6f18e314a5
E0616 14:41:21.576063       1 controller.go:329] "Reconciler error" err="failed to retrieve kubeconfig secret for Cluster dh-cap-dev/review-cluster-pr-14: secrets \"review-cluster-pr-14-kubeconfig\" not found" controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" MachinePool="dh-cap-dev/review-cluster-pr-14-pool-0" namespace="dh-cap-dev" name="review-cluster-pr-14-pool-0" reconcileID=add7af64-f0b7-4ee5-8258-c5d3d35d2aed
I0616 14:41:24.527682       1 cluster_controller.go:241] "Cluster still has children - deleting them first" controller="cluster" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster" Cluster="dh-cap-dev/review-cluster-pr-14" namespace="dh-cap-dev" name="review-cluster-pr-14" reconcileID=8ad7badb-e7bb-4e4b-9eb3-86f91c60c27c count=1
I0616 14:41:24.527719       1 cluster_controller.go:267] "Cluster still has descendants - need to requeue" controller="cluster" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster" Cluster="dh-cap-dev/review-cluster-pr-14" namespace="dh-cap-dev" name="review-cluster-pr-14" reconcileID=8ad7badb-e7bb-4e4b-9eb3-86f91c60c27c descendants="Machine pools: review-cluster-pr-14-pool-0" indirect descendants count=0
I0616 14:41:24.532685       1 cluster_controller.go:241] "Cluster still has children - deleting them first" controller="cluster" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster" Cluster="dh-cap-dev/review-cluster-pr-16" namespace="dh-cap-dev" name="review-cluster-pr-16" reconcileID=2830c6b7-92b6-4325-8c4e-d1985f792c1b count=1
I0616 14:41:24.532720       1 cluster_controller.go:267] "Cluster still has descendants - need to requeue" controller="cluster" controllerGroup="cluster.x-k8s.io" controllerKind="Cluster" Cluster="dh-cap-dev/review-cluster-pr-16" namespace="dh-cap-dev" name="review-cluster-pr-16" reconcileID=2830c6b7-92b6-4325-8c4e-d1985f792c1b descendants="Machine pools: review-cluster-pr-16-pool-0" indirect descendants count=0
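
One thing that may help narrow this down (rough sketch, using the names and namespace from the logs above): inspect what is still holding the stuck MachinePool, i.e. its finalizers and owner references.

kubectl get machinepools.cluster.x-k8s.io review-cluster-pr-14-pool-0 -n dh-cap-dev \
  -o jsonpath='{.metadata.finalizers}{"\n"}{.metadata.ownerReferences}{"\n"}'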

Environment:

  • Cluster-api-provider-aws version: v2.1.3
  • Kubernetes version: (use kubectl version): 1.27 (EKS)
  • OS (e.g. from /etc/os-release):

Julian-Chu, Jun 16 '23

This issue is currently awaiting triage.

If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and providing further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot, Jun 16 '23

Just had the same issue while deleting a cluster. Removing the finalizers on the machinepools.cluster.x-k8s.io object removes the machine pool.
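
For reference, a rough sketch of that workaround (object name and namespace are the ones from this issue; substitute your own, and note that clearing finalizers bypasses the normal CAPI cleanup, so only do this once the AWS resources are confirmed gone):

kubectl patch machinepools.cluster.x-k8s.io review-cluster-pr-14-pool-0 -n dh-cap-dev \
  --type=merge -p '{"metadata":{"finalizers":null}}'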

cablunar, Jul 14 '23

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot, Jan 24 '24

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot, Feb 23 '24

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot, Mar 24 '24

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot, Mar 24 '24