
SchedulerObservedGeneration Not Updated When No Clusters Match LabelSelector

Open · lxtywypc opened this issue 1 year ago · 1 comment

What happened:

When I use a labelSelector in clusterAffinity and then remove the labels from the cluster so that no clusters match the selector, the scheduling result in the binding is patched to nil and the Works are removed, but the schedulerObservedGeneration is not updated.

What you expected to happen:

After the scheduling result is removed, the schedulerObservedGeneration should also be updated.
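
Concretely, in the reproduction below the expected status after the scheduling result is cleared would look like this (the condition values are taken from the second ResourceBinding dump; only schedulerObservedGeneration differs from what is actually observed):

    status:
      conditions:
      - reason: NoClusterFit
        status: "False"
        type: Scheduled
      schedulerObservedGeneration: 5   # should follow metadata.generation (5) instead of staying at 4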

How to reproduce it (as minimally and precisely as possible):

  1. Prepare a PropagationPolicy and a Deployment that matches the policy (both manifests are shown below):

    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: test-pp
      namespace: test
    spec:
      placement:
        clusterAffinity:
          labelSelector:
            matchLabels:
              cluster-labels: ok
      resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment
        name: test-deployment
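
    The deployment manifest (deploy.yaml) is not included in the report. A minimal manifest along the following lines (assumed; the name, namespace, replica count, and resource requests match what later appears in the ResourceBinding) is enough to reproduce the behaviour:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-deployment
      namespace: test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: test-deployment      # pod labels are placeholders, not taken from the report
      template:
        metadata:
          labels:
            app: test-deployment
        spec:
          containers:
          - name: app
            image: nginx            # any image works; only the scheduling of the binding matters here
            resources:
              requests:
                cpu: 100m
                memory: 1Gi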
    
  2. Apply the policy and deployment, and label a cluster with cluster-labels=ok:

    karmada:~/karmada/test$ kubectl apply -f test-pp.yaml
    propagationpolicy.policy.karmada.io/test-pp created
    
    karmada:~/karmada/test$ kubectl apply -f deploy.yaml
    deployment.apps/test-deployment created
    
    karmada:~/karmada/test$ kubectl label cluster cluster-1 cluster-labels=ok
    cluster.cluster.karmada.io/cluster-1 labeled
    
    karmada:~/karmada/test$ kubectl get rb -n test test-deployment-deployment -o jsonpath='{.metadata.uid}{"\n"}'
    d1b2571e-be6d-47af-8e83-80941154a579
    
    karmada:~/karmada/test$ kubectl get work -A -l "resourcebinding.karmada.io/uid=d1b2571e-be6d-47af-8e83-80941154a579"
    NAMESPACE              NAME                        APPLIED   AGE
    karmada-es-cluster-1   test-deployment-db6788b48   True      14m
    

    Then get the resourcebinding:

    apiVersion: work.karmada.io/v1alpha2
    kind: ResourceBinding
    metadata:
      annotations:
        policy.karmada.io/applied-placement: '{"clusterAffinity":{"labelSelector":{"matchLabels":{"cluster-labels":"ok"}}},"clusterTolerations":[{"key":"cluster.karmada.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"cluster.karmada.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}'
        propagationpolicy.karmada.io/name: test-pp
        propagationpolicy.karmada.io/namespace: test
      creationTimestamp: "2023-11-16T06:15:16Z"
      finalizers:
      - karmada.io/binding-controller
      generation: 4
      labels:
        propagationpolicy.karmada.io/name: test-pp
        propagationpolicy.karmada.io/namespace: test
        propagationpolicy.karmada.io/uid: e16c5d5c-32b6-484a-990a-a8d48d61f7a2
      name: test-deployment-deployment
      namespace: test
      ownerReferences:
      - apiVersion: apps/v1
        blockOwnerDeletion: true
        controller: true
        kind: Deployment
        name: test-deployment
        uid: acfa5d48-d3f1-417b-bcb2-d965b12499f6
      resourceVersion: "910596169"
      selfLink: /apis/work.karmada.io/v1alpha2/namespaces/test/resourcebindings/test-deployment-deployment
      uid: d1b2571e-be6d-47af-8e83-80941154a579
    spec:
      clusters:
      - name: cluster-1
        replicas: 2
      placement:
        clusterAffinity:
          labelSelector:
            matchLabels:
              cluster-labels: ok
        clusterTolerations:
        - effect: NoExecute
          key: cluster.karmada.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: cluster.karmada.io/unreachable
          operator: Exists
          tolerationSeconds: 300
      replicaRequirements:
        resourceRequest:
          cpu: 100m
          memory: 1Gi
      replicas: 2
      resource:
        apiVersion: apps/v1
        kind: Deployment
        name: test-deployment
        namespace: test
        resourceVersion: "910595470"
        uid: acfa5d48-d3f1-417b-bcb2-d965b12499f6
      schedulerName: default-scheduler
    status:
      aggregatedStatus:
      - applied: true
        clusterName: cluster-1
        health: Unhealthy
        status:
          replicas: 2
          unavailableReplicas: 2
          updatedReplicas: 2
      conditions:
      - lastTransitionTime: "2023-11-16T06:15:51Z"
        message: Binding has been scheduled successfully.
        reason: Success
        status: "True"
        type: Scheduled
      - lastTransitionTime: "2023-11-16T06:15:51Z"
        message: All works have been successfully applied
        reason: FullyAppliedSuccess
        status: "True"
        type: FullyApplied
      schedulerObservedGeneration: 4
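
    At this point metadata.generation and status.schedulerObservedGeneration agree (both are 4), which can be checked directly, e.g.:

    karmada:~/karmada/test$ kubectl get rb -n test test-deployment-deployment -o jsonpath='{.metadata.generation}{" "}{.status.schedulerObservedGeneration}{"\n"}'
    4 4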
    
  3. Then change the label on the cluster to cluster-labels=not_ok so that it no longer matches the selector:

    karmada:~/karmada/test$ kubectl label cluster cluster-1 cluster-labels=not_ok --overwrite
    cluster.cluster.karmada.io/cluster-1 labeled
    
    karmada:~/karmada/test$ kubectl get work -A -l "resourcebinding.karmada.io/uid=d1b2571e-be6d-47af-8e83-80941154a579"
    No resources found
    

    Then get the resourcebinding again:

    apiVersion: work.karmada.io/v1alpha2
    kind: ResourceBinding
    metadata:
      annotations:
        policy.karmada.io/applied-placement: '{"clusterAffinity":{"labelSelector":{"matchLabels":{"cluster-labels":"ok"}}},"clusterTolerations":[{"key":"cluster.karmada.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"cluster.karmada.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}'
        propagationpolicy.karmada.io/name: test-pp
        propagationpolicy.karmada.io/namespace: test
      creationTimestamp: "2023-11-16T06:15:16Z"
      finalizers:
      - karmada.io/binding-controller
      generation: 5
      labels:
        propagationpolicy.karmada.io/name: test-pp
        propagationpolicy.karmada.io/namespace: test
        propagationpolicy.karmada.io/uid: e16c5d5c-32b6-484a-990a-a8d48d61f7a2
      name: test-deployment-deployment
      namespace: test
      ownerReferences:
      - apiVersion: apps/v1
        blockOwnerDeletion: true
        controller: true
        kind: Deployment
        name: test-deployment
        uid: acfa5d48-d3f1-417b-bcb2-d965b12499f6
      resourceVersion: "910604985"
      selfLink: /apis/work.karmada.io/v1alpha2/namespaces/test/resourcebindings/test-deployment-deployment
      uid: d1b2571e-be6d-47af-8e83-80941154a579
    spec:
      placement:
        clusterAffinity:
          labelSelector:
            matchLabels:
              cluster-labels: ok
        clusterTolerations:
        - effect: NoExecute
          key: cluster.karmada.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: cluster.karmada.io/unreachable
          operator: Exists
          tolerationSeconds: 300
      replicaRequirements:
        resourceRequest:
          cpu: 100m
          memory: 1Gi
      replicas: 2
      resource:
        apiVersion: apps/v1
        kind: Deployment
        name: test-deployment
        namespace: test
        resourceVersion: "910595470"
        uid: acfa5d48-d3f1-417b-bcb2-d965b12499f6
      schedulerName: default-scheduler
    status:
      conditions:
      - lastTransitionTime: "2023-11-16T06:25:10Z"
        message: '0/2 clusters are available: 1 cluster(s) did not match the placement
          cluster affinity constraint, 1 cluster(s) had untolerated taint {cluster.karmada.io/not-ready:NoSchedule}.'
        reason: NoClusterFit
        status: "False"
        type: Scheduled
      - lastTransitionTime: "2023-11-16T06:15:51Z"
        message: All works have been successfully applied
        reason: FullyAppliedSuccess
        status: "True"
        type: FullyApplied
      schedulerObservedGeneration: 4
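
    After relabeling, spec.clusters has been cleared and metadata.generation has advanced to 5, but status.schedulerObservedGeneration is still 4. The same check as above now shows the mismatch, e.g.:

    karmada:~/karmada/test$ kubectl get rb -n test test-deployment-deployment -o jsonpath='{.metadata.generation}{" "}{.status.schedulerObservedGeneration}{"\n"}'
    5 4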
    

Anything else we need to know?:

Environment:

  • Karmada version:
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
  • Others:

lxtywypc · Nov 16 '23

Thank you for the clear description. In favor of #4251.
/assign @lxtywypc

XiShanYongYe-Chang · Nov 17 '23