
Failover feature-gate cannot be disabled correctly

kubepopeye opened this issue 1 year ago • 13 comments

Please provide an in-depth description of the question you have:

I don't want Karmada to trigger a failover when a cluster is unreachable, so I tried to disable the Failover feature gate directly in karmada-controller-manager and found that failover still occurs!

What do you think about this question?: I looked at the Karmada implementation. The cluster-controller does check the Failover feature gate in the monitor path, but the taintClusterByCondition method lacks that check, so the taint is still applied, which ultimately triggers failover behaviour even though the feature gate is disabled.
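For context, a minimal, self-contained sketch of the kind of guard being asked for here. The feature-gate wiring and names below are illustrative assumptions, not karmada's actual source:

```go
// Minimal sketch of the guard the reporter expected in taintClusterByCondition.
// The gate name and wiring are illustrative; karmada's real feature-gate plumbing differs.
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

const Failover featuregate.Feature = "Failover" // assumed gate name for illustration

var gate featuregate.MutableFeatureGate = featuregate.NewFeatureGate()

func init() {
	_ = gate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		Failover: {Default: true, PreRelease: featuregate.Beta},
	})
}

// taintClusterByCondition (sketch): return early when Failover is disabled,
// so condition-based taints are never added.
func taintClusterByCondition() {
	if !gate.Enabled(Failover) {
		return // feature gate off: do not taint the unreachable cluster
	}
	fmt.Println("applying NoSchedule taint for NotReady/Unreachable condition")
}

func main() {
	// Simulate running the controller with --feature-gates=Failover=false.
	_ = gate.SetFromMap(map[string]bool{string(Failover): false})
	taintClusterByCondition()
}
```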

Environment:

  • Karmada version: v1.8.0
  • Kubernetes version: 1.25
  • Others:

kubepopeye avatar Aug 15 '24 03:08 kubepopeye

taintClusterByCondition only adds NoSchedule taints, which only affect scheduling.
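For reference, a small sketch of the distinction being drawn here, using the Kubernetes core taint types; the taint key shown is an assumed example, not necessarily the exact key karmada applies:

```go
// Sketch: a NoSchedule taint only blocks new scheduling decisions, while a
// NoExecute taint is what would drive eviction (and hence failover-like moves).
// The taint key below is an assumption for illustration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	noSchedule := corev1.Taint{
		Key:    "cluster.karmada.io/unreachable",
		Effect: corev1.TaintEffectNoSchedule, // affects scheduling only
	}
	noExecute := corev1.Taint{
		Key:    "cluster.karmada.io/unreachable",
		Effect: corev1.TaintEffectNoExecute, // would trigger eviction
	}
	fmt.Printf("%s=%s vs %s=%s\n", noSchedule.Key, noSchedule.Effect, noExecute.Key, noExecute.Effect)
}
```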

whitewindmills avatar Aug 15 '24 06:08 whitewindmills

(screenshot)

kubepopeye avatar Aug 15 '24 07:08 kubepopeye

So what is causing this problem, and can you help answer? It's true that the taint is NoSchedule, but it still ends up triggering the cleanup of orphaned works.

@whitewindmills

kubepopeye avatar Aug 15 '24 07:08 kubepopeye

Orphan works can be caused by multiple things; I can't find the root cause from these comments alone. Can you paste the scheduler logs here?

whitewindmills avatar Aug 15 '24 07:08 whitewindmills

Scheduler logs? I found that the cluster seems to be removed from the ResourceBinding's spec only when the taint is present, which ends up causing findOrphan to pick those works up.
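To make that concrete, here is a simplified, self-contained sketch of how an orphan check of this kind works conceptually; the types and the helper name are stand-ins for illustration, not karmada's real API:

```go
// Sketch: a Work becomes "orphaned" once its target cluster is no longer
// listed in the binding's scheduled clusters. Types are simplified stand-ins.
package main

import "fmt"

type TargetCluster struct{ Name string }

type Work struct{ Cluster string }

// findOrphanWorks (illustrative) returns works pointing at clusters that the
// binding no longer targets, which is what happens when the scheduler drops a
// tainted cluster from spec.clusters.
func findOrphanWorks(scheduled []TargetCluster, works []Work) []Work {
	valid := map[string]bool{}
	for _, tc := range scheduled {
		valid[tc.Name] = true
	}
	var orphans []Work
	for _, w := range works {
		if !valid[w.Cluster] {
			orphans = append(orphans, w)
		}
	}
	return orphans
}

func main() {
	// member1 was dropped from the scheduling result after being tainted,
	// so its Work is treated as orphaned and would be deleted.
	works := []Work{{Cluster: "member1"}, {Cluster: "member2"}}
	scheduled := []TargetCluster{{Name: "member2"}}
	fmt.Println(findOrphanWorks(scheduled, works)) // [{member1}]
}
```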

kubepopeye avatar Aug 15 '24 08:08 kubepopeye

You have disabled the failover feature, but karmada-scheduler might still change its scheduling result.

whitewindmills avatar Aug 16 '24 05:08 whitewindmills

You have disabled the failover feature, but karmada-scheduler might still change its scheduling result.

Is this the expected, correct behaviour? I've found that in some cases it can lead to an empty list of Cluster's APIEnablements, which ends up with disastrous consequences!

kubepopeye avatar Aug 16 '24 09:08 kubepopeye

No, but we're fixing it. Ref: https://github.com/karmada-io/karmada/pull/5325 and https://github.com/karmada-io/karmada/pull/5216

whitewindmills avatar Aug 16 '24 09:08 whitewindmills

I've found that in some cases it can lead to an empty list of Cluster's APIEnablements, which ends up with disastrous consequences!

Have you confirmed that's the root cause?

whitewindmills avatar Aug 16 '24 09:08 whitewindmills

I've found that in some cases it can lead to an empty list of Cluster's APIEnablements, which ends up with disastrous consequences!

Have you confirmed that's the root cause?

Yes. To me, disabling failover means that no migration across availability zones should take place, but it seems there are code paths here that still cause failover-like behaviour.

kubepopeye avatar Aug 16 '24 09:08 kubepopeye

I've found that in some cases it can lead to an empty list of Cluster's APIEnablements, which ends up with disastrous consequences!

Have you confirmed that's the root cause?

If we are unlucky, the cluster-status-controller clears the apiEnablements in the Cluster status when the cluster goes offline. The scheduler then steps in, finds no matching APIs, and clears the ResourceBinding's spec.clusters; finally the binding controller's removeOrphan deletes our downstream resources. That is the complete chain, so we still consider the failover implementation incomplete.
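To make the point concrete, here is a rough, self-contained sketch of the defensive check being argued for: treating an empty APIEnablements list as "status not collected" rather than "no APIs installed". The types below are simplified stand-ins, not karmada's real Cluster API:

```go
// Rough sketch of the defensive behaviour the reporter is arguing for: if a
// cluster's reported APIEnablements is empty (likely a collection failure, not
// a cluster with zero APIs), don't conclude that the resource's API is missing.
package main

import "fmt"

type APIEnablement struct {
	GroupVersion string
	Resources    []string
}

type ClusterStatus struct {
	APIEnablements []APIEnablement
}

// supportsAPI reports whether the cluster is known to serve the given
// group/version and resource. An empty enablement list is treated as
// "unknown" and therefore not as a reason to unschedule the workload.
func supportsAPI(status ClusterStatus, groupVersion, resource string) bool {
	if len(status.APIEnablements) == 0 {
		return true // incomplete status: keep the current scheduling result
	}
	for _, e := range status.APIEnablements {
		if e.GroupVersion != groupVersion {
			continue
		}
		for _, r := range e.Resources {
			if r == resource {
				return true
			}
		}
	}
	return false
}

func main() {
	offline := ClusterStatus{} // status wiped while the cluster was unreachable
	fmt.Println(supportsAPI(offline, "apps/v1", "deployments")) // true: don't reschedule
}
```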

kubepopeye avatar Aug 16 '24 09:08 kubepopeye

First, this has nothing to do with failover. Did you actually see your cluster failing? That's important: wrong APIEnablements are usually caused by network errors or a failing APIService.

To confirm, grep for logs like these in your karmada-controller-manager:

  • Failed to get any APIs installed in Cluster
  • Maybe get partial

If you find them, that's the case.

whitewindmills avatar Aug 16 '24 10:08 whitewindmills

  • Failed to get any APIs installed in Cluster

Yes: Failed to get any APIs installed in Cluster

(screenshot of the log output)

kubepopeye avatar Aug 16 '24 10:08 kubepopeye

Hi @kubepopeye, thanks for your response,

According to the log information, your analysis is correct. We noticed this problem and fixed it in v1.12, just as @whitewindmills said: in karmada-controller-manager we added a CompleteAPIEnablements condition for the cluster status, and on the scheduler side we handle that condition.
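For illustration, a hedged sketch of how a consumer might honour such a condition. The condition type string comes from the comment above; the helper names and wiring are assumptions, not the shipped scheduler code:

```go
// Sketch: skip API-enablement filtering when the cluster has not reported a
// complete API list. "CompleteAPIEnablements" is the condition type named
// above; everything else here is illustrative.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// apiEnablementsComplete reports whether the cluster's API list can be trusted.
func apiEnablementsComplete(conditions []metav1.Condition) bool {
	cond := meta.FindStatusCondition(conditions, "CompleteAPIEnablements")
	return cond != nil && cond.Status == metav1.ConditionTrue
}

func main() {
	conditions := []metav1.Condition{
		{Type: "CompleteAPIEnablements", Status: metav1.ConditionFalse, Reason: "CollectionFailed"},
	}
	if !apiEnablementsComplete(conditions) {
		fmt.Println("API list incomplete: keep the previous scheduling result instead of filtering the cluster out")
	}
}
```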

This problem should now be fixed; can you help confirm?

XiShanYongYe-Chang avatar Dec 31 '24 04:12 XiShanYongYe-Chang

Due to the long-term lack of activity, we plan to close this issue first. If you still encounter similar problems, please feel free to reopen this issue. Thanks to all of you! /close

XiShanYongYe-Chang avatar Mar 19 '25 03:03 XiShanYongYe-Chang

@XiShanYongYe-Chang: Closing this issue.

In response to this:

Due to the long-term lack of activity, we plan to close this issue first. If you still encounter similar problems, please feel free to reopen this issue. Thanks to all of you! /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

karmada-bot avatar Mar 19 '25 03:03 karmada-bot