
Advertising specific GPU types as separate extended resource

Open deepanker-s opened this issue 11 months ago • 14 comments

Hello, I am working at Uber.

1. Feature description

Advertising special hardware (specific GPU types, say A100) as a separate extended resource.

As of now, there is a single blanket resource, "nvidia.com/gpu", for all GPU types that this plugin supports. If we want our pods to run only on specific GPU types, we need to be able to request such a resource.

There are two ways to request such a specific resource:

  1. [Existing] Using nodeLabels/nodeSelectors
  2. [New] Advertising the type directly as a new resource such as "nvidia.com/gpu-A100-...."

This added functionality could be gated behind a configuration flag and could use gpu-feature-discovery labels to extract the SKU/GPU type. Both approaches are sketched below.
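For illustration, a minimal sketch of the two approaches (the label value is just an example, and nvidia.com/gpu-A100 is a hypothetical resource name that the plugin does not advertise today):

```yaml
# Approach 1 (existing): pin the pod to A100 nodes via the gpu-feature-discovery product label.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-via-label
spec:
  nodeSelector:
    nvidia.com/gpu.product: NVIDIA-A100-SXM4-40GB  # example GFD label value
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/gpu: 1  # generic resource; the GPU type is enforced only by the selector
---
# Approach 2 (proposed): request a per-type extended resource directly.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-via-typed-resource
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/gpu-A100: 1  # hypothetical per-type resource, not advertised by the plugin today
```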

2. Why

  1. Similar per-type resource advertising is already done for MIG-enabled devices (a request example follows this list):
     • nvidia.com/gpu
     • nvidia.com/mig-1g.5gb
     • nvidia.com/mig-2g.10gb
     • nvidia.com/mig-3g.20gb
     • nvidia.com/mig-7g.40gb
  2. Another reason is that using nodeLabels/nodeSelectors may not be possible due to other limitations.
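With the "mixed" MIG strategy, those MIG resources are already requested per type, which is the same pattern this feature asks for on whole GPUs. A minimal example of consuming one of them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-slice-pod
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1  # one 1g.5gb MIG slice, advertised as its own extended resource
```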

3. Similar existing work

I found a design doc for "Custom Resource Naming and Supporting Multiple GPU SKUs on a Single Node in Kubernetes".

It proposes advertising different GPU types under new resource names, but it targets the case where those different GPU cards are on the same node. I am not sure whether the same mechanism would also apply when the corresponding GPU cards/types are spread across different nodes.

4. Summary of queries

  1. Is the above feature request already covered by the "Similar existing work" mentioned above?
  2. If yes, when will that work be approved and made available?

deepanker-s avatar Jul 21 '23 08:07 deepanker-s

There is still no planned support for this in the k8s-device plugin. All of the functionality is there (as described in the link you provided), but it is explicitly disabled by this line in the code https://github.com/NVIDIA/k8s-device-plugin/blob/main/cmd/nvidia-device-plugin/main.go#L239.

The future for supporting multiple GPU cards per node is via a new mechanism in Kubernetes called Dynamic Resource Allocation (DRA): https://docs.google.com/document/d/1BNWqgx_SmZDi-va_V31v3DnuVwYnF2EmN7D-O_fB6Oo/edit https://github.com/NVIDIA/k8s-dra-driver

klueska avatar Jul 24 '23 08:07 klueska

Hey Kevin, Thanks for the info.

I was actually asking about specific GPU resource naming for GPUs on different nodes (not on the same node). But it looks like the answer is the same: DRA can help achieve that as well.

deepanker-s avatar Jul 26 '23 10:07 deepanker-s

Hey Kevin, I understand now that DRA can be used to specify GPU types (A100, H100) for different pods using "GpuClaimParameters".

Is there any functionality to advertise these specified resources/resourceClaims?

Example - using DRA "GpuClaimParameters" (as in the gpu-test6 example), if:

  • podA is scheduled on A100 GPU
  • podB is scheduled on H100 GPU

Will the device plugin advertise the resource usage details, i.e. how many A100 devices are being used? Currently we advertise, for example: nvidia.com/gpu: 10

Will it provide details such as the following in any manner? nvidia.com/gpu-A100: 5
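For illustration only, this is what such per-type accounting might look like in a node's status; nvidia.com/gpu-A100 is a hypothetical resource name that neither the device plugin nor the DRA driver publishes today, and in-use counts would still have to be derived by summing pod requests against allocatable:

```yaml
# Hypothetical node status fragment (not produced by any current NVIDIA component).
status:
  capacity:
    nvidia.com/gpu-A100: "8"
  allocatable:
    nvidia.com/gpu-A100: "8"
```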

deepanker-s avatar Aug 08 '23 12:08 deepanker-s

We're looking to install the yunikorn scheduler on the cluster, and having different resources for different GPUs would help a lot in prioritizing the use of more powerful (and less available) GPUs among users via fair share. It's impossible to do with just labels.

dimm0 avatar Sep 27 '23 23:09 dimm0

There is still no planned support for this in the k8s-device plugin

Is there a reason why this isn't planned to be implemented here? This seems like an essential feature for any cluster with more than one GPU model, and there's currently no adequate workaround at all.

sjdrc avatar Oct 19 '23 11:10 sjdrc

It was a product decision, not an engineering one.

All of the code to support it is merged in the plugin and simply disabled by https://github.com/NVIDIA/k8s-device-plugin/blob/main/cmd/nvidia-device-plugin/main.go#L239.

The decision not to support this gets revisited periodically, but our product team is still not in favor of it, so our hands are tied.

If you want to enable it in a custom build of the plugin, just remove the line referenced above and it should work as described in https://docs.google.com/document/d/1dL67t9IqKC2-xqonMi6DV7W2YNZdkmfX7ibB6Jb-qmk/edit#heading=h.jw5js7865egx (a rough config sketch follows below).
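As a rough sketch only: based on the linked design doc, a per-type resource mapping in the plugin's config file might look something like the following. The exact schema and field names are assumptions here and should be verified against the doc and the plugin's v1 config package; nvidia.com/gpu-a100 and nvidia.com/gpu-t4 are example names:

```yaml
# Sketch of config-file based resource renaming; field names are assumptions,
# and the feature is disabled in official builds of the plugin.
version: v1
resources:
  gpus:
    - pattern: "*A100*"           # match GPUs by product name
      name: nvidia.com/gpu-a100   # advertise matching GPUs under this resource
    - pattern: "*T4*"
      name: nvidia.com/gpu-t4
```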

klueska avatar Oct 19 '23 11:10 klueska

@klueska thanks for the explanation. We also explored the extended resource options, and we even wrote our own component to patch nodes with GPU extended resources. Just curious, would you be open to adding a flag to turn this feature on/off so we don't have to deploy a customized version of the NVIDIA device plugin?

yuzliu avatar Oct 22 '23 22:10 yuzliu

@yuzliu Do you have multiple GPU types per node? If not, are node-labels from GFD / nodeSelectors not enough for your use case?

klueska avatar Nov 01 '23 11:11 klueska

@klueska Thanks for the reply! We don't have multiple GPU types per node, but we do have multiple GPU types per cluster. We have already deployed GPU feature discovery and have the GPU product label on each GPU node, but that doesn't solve our problem because:

  1. We have clusters with multiple GPU types, e.g. A100 + T4 mixed in one cluster.
  2. We have a ResourceQuota on each namespace and want to enforce quotas per GPU type, e.g. namespace A may only use 1 A100 and 5 T4s at the namespace level (see the sketch after this list).
  3. We want to collect metrics accurately per GPU type. For example, we'd like to know that namespace A has 4 A100s available, 1 A100 was requested, and 3 are left.
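A minimal sketch of what such a per-type quota could look like if per-type extended resources existed; the resource names nvidia.com/gpu-a100 and nvidia.com/gpu-t4 are hypothetical, and the namespace name is an example:

```yaml
# Hypothetical per-GPU-type quota for namespace A; this only works if the
# device plugin advertised per-type extended resources under these names.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-type-quota
  namespace: namespace-a
spec:
  hard:
    requests.nvidia.com/gpu-a100: "1"  # at most 1 A100 requested in this namespace
    requests.nvidia.com/gpu-t4: "5"    # at most 5 T4s requested in this namespace
```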

yuzliu avatar Nov 01 '23 11:11 yuzliu

Got it -- labels from GPU feature discovery are sufficient for 1, but not for 2 and 3 -- for those you need unique extended resources.

klueska avatar Nov 01 '23 11:11 klueska

Yep, we even have an internal component that advertises extended resources, e.g. for V100, A100 and T4. But I'd really love to carry less customized logic internally and rely on NVIDIA's official component instead, to make our long-term maintenance easier.

yuzliu avatar Nov 01 '23 11:11 yuzliu

This issue has become stale and will be closed automatically within 30 days if no activity is recorded.

github-actions[bot] avatar Feb 27 '24 04:02 github-actions[bot]

Any progress on this issue?

leoncamel avatar Apr 18 '24 03:04 leoncamel