With the preempt or reclaim plugin, a high-priority pod cannot be placed on a node that meets the conditions for preemption

LivingCcj opened this issue 1 year ago • 16 comments

When the volcano scheduler enables the preempt or reclaim plugin, a high-priority pod is unable to preempt a low-priority pod. Although some nodes meet the preemption conditions, because one of the predicateFns returns a non-nil err, the potential node is ignored: https://github.com/volcano-sh/volcano/blob/94c62a4b73c71bd263e8ae770c694343b6007380/pkg/scheduler/actions/preempt/preempt.go#L211-L221
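
For readers who do not want to open the link, here is a minimal, self-contained Go sketch of the behaviour being described. The names and types (Node, PredicateResult, filterPreemptionCandidates) are hypothetical stand-ins, not Volcano's actual API; the point is only that a non-nil err discards the node even when the status code alone (Unschedulable) would have kept it as a preemption candidate.

package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the scheduler's node and predicate result types.
type Node struct{ Name string }

type Code int

const (
	Success Code = iota
	Unschedulable // could fit if lower-priority pods were evicted
)

type PredicateResult struct {
	Code Code
	Err  error // non-nil when the predicate plugin also returned an error
}

// filterPreemptionCandidates mirrors the reported behaviour: a non-nil error
// drops the node, even though an Unschedulable code by itself would have kept
// it as a candidate for preemption.
func filterPreemptionCandidates(nodes []Node, predicate func(Node) PredicateResult) []Node {
	var candidates []Node
	for _, n := range nodes {
		res := predicate(n)
		if res.Err != nil {
			continue // the branch this issue is about
		}
		if res.Code == Success || res.Code == Unschedulable {
			candidates = append(candidates, n)
		}
	}
	return candidates
}

func main() {
	nodes := []Node{{Name: "gpu-node-1"}}
	// Simulate a device-sharing predicate that reports Unschedulable and also
	// returns an error, as in the vgpu case.
	predicate := func(Node) PredicateResult {
		return PredicateResult{Code: Unschedulable, Err: errors.New("not enough gpu fitted on this node")}
	}
	fmt.Println("candidates:", filterPreemptionCandidates(nodes, predicate)) // prints: candidates: []
}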

Environment:

  • Volcano Version: volcano v1.8.2
  • Kubernetes version (use kubectl version): v1.20.15

LivingCcj avatar Feb 26 '24 11:02 LivingCcj

Would you please supply more information, such as the scheduler configmap, scheduler logs, and job configs?

lowang-bh avatar Feb 28 '24 02:02 lowang-bh

When preempting or reclaiming, if a predicate function handler returns a status with the Unschedulable state but the err is not nil, the potential node is ignored. According to the comment in predicateFn, "Allows scheduling to nodes that are in Success or Unschedulable state after filtering by predicate", the node should still be considered a preemption candidate. Here is the volcano scheduler configmap:

apiVersion: v1
data:
  volcano-scheduler.conf: |
    actions: "enqueue, allocate, preempt, backfill"
    tiers:
    - plugins:
      - name: priority
      - name: gang
        enablePreemptable: false
      - name: conformance
      - name: cdp
    - plugins:
      - name: drf
        enablePreemptable: false
      - name: predicates
      - name: nodeorder
        arguments:
          nodeaffinity.weight: 5
      - name: binpack
        arguments:
          binpack.weight: 5
          binpack.cpu: 2
          binpack.memory: 1
kind: ConfigMap
metadata:
  name: volcano-scheduler-configmap
  namespace: volcano-system

LivingCcj avatar Feb 28 '24 04:02 LivingCcj

but the err is not nil

What is the error?

lowang-bh avatar Mar 01 '24 02:03 lowang-bh

There is a scenario: an unscheduled pod requesting GPU resources is in the session of the preempt action. One node has some lower-priority pods; if it preempted a lower-priority pod, the unscheduled pod could be placed on that node. However, at the predicate stage, predicateStatus.Code is Unschedulable and err is not nil (refer to the code below). This leads to the potential node being ignored when filtering nodes for preemption. https://github.com/volcano-sh/volcano/blob/6e9f4f6b699b5bebdf7dbc4ed13668034456a90b/pkg/scheduler/plugins/predicates/predicates.go#L530-L554

LivingCcj avatar Mar 01 '24 10:03 LivingCcj

There is a scenario: an unscheduled pod requesting GPU resources is in the session of the preempt action. One node has some lower-priority pods; if it preempted a lower-priority pod, the unscheduled pod could be placed on that node. However, at the predicate stage, predicateStatus.Code is Unschedulable and err is not nil (refer to the code below). This leads to the potential node being ignored when filtering nodes for preemption.

https://github.com/volcano-sh/volcano/blob/6e9f4f6b699b5bebdf7dbc4ed13668034456a90b/pkg/scheduler/plugins/predicates/predicates.go#L530-L554

It's truly a problem in vgpu preemption. I think we should not return an err when the vgpu resource is insufficient here; if you're interested, you're welcome to fix that.
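
To make that direction concrete, here is a small, self-contained Go sketch; the names (Status, fitGPUs) are made up for illustration and are not the actual Volcano code. The idea is that an insufficient-vgpu result is reported as an Unschedulable status with a reason while the err stays nil, so a preempt-time caller that only discards nodes on hard errors can keep the node and try to evict lower-priority pods.

package main

import "fmt"

// Hypothetical status type standing in for the scheduler's predicate result.
type Code int

const (
	Success Code = iota
	Unschedulable // may fit after lower-priority pods are evicted
)

type Status struct {
	Code   Code
	Reason string
}

// fitGPUs sketches the suggested behaviour: when the requested vgpus do not
// fit, report Unschedulable with a reason instead of returning an error.
func fitGPUs(freeGPUs, requestedGPUs int) (Status, error) {
	if requestedGPUs > freeGPUs {
		return Status{
			Code:   Unschedulable,
			Reason: "not enough gpu fitted on this node",
		}, nil // previously an error was also returned here
	}
	return Status{Code: Success}, nil
}

func main() {
	status, err := fitGPUs(1, 2)
	// With err == nil, a preempt-time caller can keep this node as a candidate
	// instead of discarding it outright.
	fmt.Printf("code=%d reason=%q err=%v\n", status.Code, status.Reason, err)
}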

Monokaix avatar Mar 19 '24 03:03 Monokaix

Same problem: https://github.com/volcano-sh/volcano/issues/3186. We can fix it to resolve both of them.

Monokaix avatar Mar 19 '24 03:03 Monokaix

@LivingCcj @lowang-bh You're welcome to fix this: )

Monokaix avatar Mar 19 '24 03:03 Monokaix

This phenomenon recurs when the vgpu resource is insufficient. Here are the volcano scheduler logs:

I0319 02:51:05.281886       1 preempt.go:43] Enter Preempt ...
I0319 02:51:05.281895       1 job_info.go:728] job podgroup-f354bb74-7c3d-4429-aa92-3c02a7ab99ba/kubeflow actual: map[:1], ji.TaskMinAvailable: map[]
I0319 02:51:05.281913       1 preempt.go:65] Added Queue <default> for Job <kubeflow/podgroup-f354bb74-7c3d-4429-aa92-3c02a7ab99ba>
I0319 02:51:05.281925       1 job_info.go:728] job podgroup-ccffee3d-b1e2-4a94-9f4d-f15502dc3f77/kubeflow actual: map[:1], ji.TaskMinAvailable: map[]
I0319 02:51:05.281942       1 job_info.go:728] job podgroup-6ba7f409-d8c0-498f-a2ec-ec7b0c7f75fc/kubeflow actual: map[:1], ji.TaskMinAvailable: map[]
I0319 02:51:05.281973       1 predicates.go:384] pod(kubeflow/x-v1-76d645bc8c-8sr2m) affinity require information is nil, plugin InterPodAffinity is skipped
I0319 02:51:05.282004       1 predicate_helper.go:55] Considering Task <kubeflow/x-v1-76d645bc8c-8sr2m> on node <10.x.x.x>: <cpu 1000.00, memory 4294967296.00, volcano.sh/vgpu-number 2000.00> vs. <cpu 2750.00, memory 8095842304.00, ephemeral-storage 38644306266000.00, hugepages-1Gi 0.00, hugepages-2Mi 0.00>
I0319 02:51:05.282013       1 predicate_helper.go:55] Considering Task <kubeflow/x-v1-76d645bc8c-8sr2m> on node <10.x.x.x>: <cpu 1000.00, memory 4294967296.00, volcano.sh/vgpu-number 2000.00> vs. <cpu 28530.00, memory 126106681344.00, hugepages-2Mi 0.00, nstack/vcuda-core 0.00, nstack/vcuda-memory 0.00, nvidia.com/gpu 2000.00, volcano.sh/vgpu-number 17000.00, ephemeral-storage 482947890401000.00, hugepages-1Gi 0.00>
I0319 02:51:05.282064       1 predicate_helper.go:75] Predicates failed for task <kubeflow/x-v1-76d645bc8c-8sr2m> on node <10.x.x.x>: task kubeflow/x-v1-76d645bc8c-8sr2m on node 10.x.x.x fit failed: plugin TaintToleration predicates failed node(s) had untolerated taint {node-role.kubernetes.io/controlplane: true}
I0319 02:51:05.282078       1 predicates.go:505] pod(kubeflow/x-v1-76d645bc8c-8sr2m) affinity require information is nil, plugin InterPodAffinity is skip for node 10.x.x.x
I0319 02:51:05.282105       1 csi.go:210] "Could not find a CSI driver name or volume handle, not counting volume"
I0319 02:51:05.282125       1 device_info.go:152] DeviceSharing:Into FitInPod x-v1-76d645bc8c-8sr2m
I0319 02:51:05.282136       1 device_info.go:167] DeviceSharing:FitInPod successed
I0319 02:51:05.282143       1 device_info.go:183] 4pdvgpu DeviceSharing starts filtering pods x-v1-76d645bc8c-8sr2m
I0319 02:51:05.282153       1 utils.go:256] counts= [{2 NVIDIA 10240 101 0}]
I0319 02:51:05.282178       1 utils.go:350] Allocating device for container request {2 NVIDIA 10240 101 0}
I0319 02:51:05.282201       1 utils.go:353] Scoring pod 10240:101:0:2i1device:1
I0319 02:51:05.282223       1 utils.go:354] gs 1 = 11441 10250 2
I0319 02:51:05.282244       1 utils.go:353] Scoring pod 10240:101:0:2i0device:0
I0319 02:51:05.282268       1 utils.go:354] gs 0 = 11441 10240 1
E0319 02:51:05.282285      1 device_info.go:187] deviceSharing err= not enough gpu fitted on this node
I0319 02:51:05.282306       1 predicate_helper.go:75] Predicates failed for task <kubeflow/x-v1-76d645bc8c-8sr2m> on node <10.x.x.x>: task kubeflow/x-v1-76d645bc8c-8sr2m on node 10.x.x.x fit failed: not enough gpu fitted on this node

Vital information: device_info.go:187] deviceSharing err= not enough gpu fitted on this node

Monokaix avatar Mar 19 '24 07:03 Monokaix

There is a scenario: an unscheduled pod requesting GPU resources is in the session of the preempt action. One node has some lower-priority pods; if it preempted a lower-priority pod, the unscheduled pod could be placed on that node. However, at the predicate stage, predicateStatus.Code is Unschedulable and err is not nil (refer to the code below). This leads to the potential node being ignored when filtering nodes for preemption. https://github.com/volcano-sh/volcano/blob/6e9f4f6b699b5bebdf7dbc4ed13668034456a90b/pkg/scheduler/plugins/predicates/predicates.go#L530-L554

It's truly a problem in vgpu preemption. I think we should not return an err when the vgpu resource is insufficient here; if you're interested, you're welcome to fix that.

@archlitchi owns the vgpu code and is familiar with it. @Monokaix

lowang-bh avatar Mar 24 '24 08:03 lowang-bh

I might be experiencing a similar issue. My cluster has 4 GPU nodes. First, I start a 4-node job with low priority, which gets scheduled and runs. A little later I start two 2-node jobs with high priority. I would expect the high-priority jobs to preempt the first job, but that doesn't happen. Please refer to the attached files. volcano.zip /cc @k82cn

dmitsh avatar May 08 '24 00:05 dmitsh

I might be experiencing a similar issue. My cluster has 4 GPU nodes. First, I start a 4-node job with low priority, which gets scheduled and runs. A little later I start two 2-node jobs with high priority. I would expect the high-priority jobs to preempt the first job, but that doesn't happen. Please refer to the attached files. volcano.zip /cc @k82cn

Maybe you can provide some logs: )

Monokaix avatar May 09 '24 01:05 Monokaix

I might be experiencing a similar issue. My cluster has 4 GPU nodes. First, I start a 4-node job with low priority, which gets scheduled and runs. A little later I start two 2-node jobs with high priority. I would expect the high-priority jobs to preempt the first job, but that doesn't happen. Please refer to the attached files. volcano.zip /cc @k82cn

Maybe you can provide some logs: )

Logs are in the zip file.

dmitsh avatar May 09 '24 01:05 dmitsh

@dmitsh, I think your case may be different from this one. According to the log, your case seems to be: preemption for gang scheduling. I'd like to create a Google doc for this case, as we have already had several discussions about it; it's time to close it :)

k82cn avatar May 09 '24 08:05 k82cn

volcano.zip

Your jobs are in the same queue, and the queue is overused, so tasks in the same queue need to be preempted. Please update to the latest version on the master branch, which fixes the issue of a high-priority job preempting a low-priority job in the same queue.

lowang-bh avatar May 09 '24 09:05 lowang-bh

I might be experiencing a similar issue. My cluster has 4 GPU nodes. First, I start a 4-node job with low priority, which gets scheduled and runs. A little later I start two 2-node jobs with high priority. I would expect the high-priority jobs to preempt the first job, but that doesn't happen. Please refer to the attached files. volcano.zip

@dmitsh We are trying to reproduce this issue and find the cause.

william-wang avatar May 09 '24 11:05 william-wang

I might be experiencing a similar issue. My cluster has 4 GPU nodes. First, I start a 4-node job with low priority, which gets scheduled and runs. A little later I start two 2-node jobs with high priority. I would expect the high-priority jobs to preempt the first job, but that doesn't happen. Please refer to the attached files. volcano.zip /cc @k82cn

Maybe you can provide some logs: )

Logs are in the zip file.

It seems your case is a different issue, and https://github.com/volcano-sh/volcano/pull/3230 has fixed it. Please check whether your volcano version includes this PR: )

Monokaix avatar May 09 '24 11:05 Monokaix

/close

Monokaix avatar May 20 '24 12:05 Monokaix

@Monokaix: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

volcano-sh-bot avatar May 20 '24 12:05 volcano-sh-bot