gpushare-scheduler-extender

Can schedule full GPUs and partial GPUs side-by-side

reverson opened this issue on Mar 11 '19 · 4 comments

Hi,

When scheduling GPUs, I can schedule a partial GPU request and a full GPU request onto the same physical GPU.

Allocatable:
 aliyun.com/gpu-count:  1
 aliyun.com/gpu-mem:    7
Allocated resources:
  Resource              Requests     Limits
  --------              --------     ------
  aliyun.com/gpu-count  1            1
  aliyun.com/gpu-mem    7            7

I have a total of 3 pods running on this machine: one requesting 1 gpu-mem, one requesting 6 gpu-mem, and another requesting 1 gpu-count.

I would expect the gpushare scheduler to deduct a full GPU's worth of memory from the allocatable gpu-mem once the full-GPU pod has been scheduled.
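
For context, here's roughly what the two request styles look like side by side; the pod names and image are placeholders, and gpu-mem is assumed to be in GiB as in the node output above:

apiVersion: v1
kind: Pod
metadata:
  name: shared-gpu-pod            # placeholder name
spec:
  containers:
  - name: app
    image: my-app:latest          # placeholder image
    resources:
      limits:
        aliyun.com/gpu-mem: 1     # partial GPU: 1 GiB of GPU memory
---
apiVersion: v1
kind: Pod
metadata:
  name: whole-gpu-pod             # placeholder name
spec:
  containers:
  - name: app
    image: my-app:latest          # placeholder image
    resources:
      limits:
        aliyun.com/gpu-count: 1   # whole GPU: one full device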

reverson avatar Mar 11 '19 17:03 reverson

The pod with gpu-count can be scheduled, but the device plugin won't grant it GPU capabilities.

It's just an extended resource for recording the GPU count. See the gpu-count definition.

I think I can disable gpu-count at scheduling time, i.e. aliyun.com/gpu-count would not be allowed as a schedulable resource and would only be used to record the GPU count. What do you think?

cheyang avatar Mar 11 '19 18:03 cheyang

Oh, interesting. What I'm seeing is that I get access to the GPUs regardless of what I select for the GPU request (gpu-mem, gpu-count, or no GPU at all). However, my teammate discovered that using the cuda-vector-add example from the k8s documentation causes the nvidia-docker2 runtime to automatically add all the GPUs to the container.

In terms of workflow, I'd like to be able to use this plugin to manage both shared GPUs and whole GPUs.

If that's not possible, then I'd definitely prefer to see gpu-count become an unschedulable resource as you describe.

reverson avatar Mar 11 '19 18:03 reverson

I think the reason is that you are using NVIDIA's CUDA docker base image, which sets the environment variable NVIDIA_VISIBLE_DEVICES=all (https://github.com/NVIDIA/nvidia-container-runtime#nvidia_visible_devices). That causes nvidia-docker2 to load the nvidia runtime for the container. You can set NVIDIA_VISIBLE_DEVICES=void when building your docker image.
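
For example, something like this in your Dockerfile (the base image tag and application are placeholders; only the ENV line matters here):

# placeholder base image; any CUDA base image that bakes in NVIDIA_VISIBLE_DEVICES=all
FROM nvidia/cuda:10.0-base
# override the default so the container sees no GPUs unless one is injected at allocation time
ENV NVIDIA_VISIBLE_DEVICES=void
# placeholder application binary
COPY cuda-vector-add /usr/local/bin/cuda-vector-add
CMD ["/usr/local/bin/cuda-vector-add"]

Pods that request aliyun.com/gpu-mem should still see their allocated device, because environment variables set on the container at run time override the image's ENV.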

cheyang avatar Mar 11 '19 23:03 cheyang

Hello, when I run the example I get this error: nvidia-container-cli: device error: unknown device id: no-gpu-has-1024MiB-to-run. How can I solve this problem? Is it related to the GPU driver?

cicijohn1983 avatar Sep 19 '19 01:09 cicijohn1983