
Research: show GPU attached to a function

Open alexellis opened this issue 7 years ago • 40 comments

Description

Show a GPU attached to an OpenFaaS function in Kubernetes

Background

We have several users using Python for data-science where GPU acceleration is available. From the investigation I've done so far, we should be able to make a few minor changes to faas-netes and then mount a GPU into a function.

Tasks

  • List compatible GPUs
  • Write some code to mount a GPU
  • Produce a short list of steps to document how to test the patches/PR
  • Document any specific requirements / limitations

Other notes

GKE has GPUs available pre-configured under Kubernetes - I think this would be the easiest way to test - https://thenewstack.io/getting-started-with-gpus-in-google-kubernetes-engine/

Otherwise you'll need an Nvidia GPU and the process for configuring your kubelet is not trivial

Documentation page from Kubernetes:

https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/

alexellis avatar Apr 12 '18 15:04 alexellis

Per these docs https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/

There are two core changes

  1. There is a new schedulable resource, nvidia.com/gpu; this is a required change.
  2. It is possible to have mixed types of GPU resources, so they recommend using node labels and node selectors to ensure that your pod ends up on the node with the specific GPU you are looking for (this is probably optional and very advanced).

The simplest example of a pod using a GPU is provided as

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU

Note the resources.limits field. It has very particular restrictions that differ from those for CPUs. These are listed as:

  • GPUs are only supposed to be specified in the limits section, which means:
  • You can specify GPU limits without specifying requests because Kubernetes will use the limit as the request value by default.
  • You can specify GPU in both limits and requests but these two values must be equal.
  • You cannot specify GPU requests without specifying limits.
  • Containers (and pods) do not share GPUs. There’s no overcommitting of GPUs.
  • Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.
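
For example, the requests/limits rule above means that if you do set the GPU in both sections, the two values must match. A minimal sketch of a container resources block that satisfies these restrictions:

resources:
  limits:
    nvidia.com/gpu: 1 # mandatory - the limit is what actually requests the GPU
  requests:
    nvidia.com/gpu: 1 # optional, but if present it must equal the limit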

I think most of these changes will be made in the request struct and in the stack file schema https://github.com/openfaas/faas/blob/master/gateway/requests/requests.go#L47 and https://github.com/openfaas/faas-cli/blob/master/stack/schema.go#L50

Modifying the FunctionRequests struct would be the absolute minimum required change.
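
For illustration only, a rough sketch of how a GPU limit might then surface in a stack.yml - the gpu field under limits is hypothetical and does not exist in the current schema:

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  cuda-vector-add:
    image: k8s.gcr.io/cuda-vector-add:v0.1
    limits:
      gpu: 1 # hypothetical field; faas-netes would translate it to nvidia.com/gpu: 1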

To support the mixed GPU case, we need to allow the developer to specify a nodeSelector, e.g.

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
  nodeSelector:
    accelerator: nvidia-tesla-p100

This would mean adding a new option to the HTTP requests and the stack schema.

LucasRoesler avatar Apr 12 '18 15:04 LucasRoesler

We already cover the node selector via stack constraints.
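
For reference, a sketch of what that might look like in a stack.yml today, assuming the GPU nodes carry an accelerator=nvidia-tesla-p100 label and that faas-netes turns key=value constraints into a nodeSelector:

functions:
  cuda-vector-add:
    image: k8s.gcr.io/cuda-vector-add:v0.1
    constraints:
      - "accelerator=nvidia-tesla-p100" # would become nodeSelector: accelerator: nvidia-tesla-p100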

PS. I think this issue should be moved to faas-netes since it's Kubernetes specific.

stefanprodan avatar Apr 12 '18 16:04 stefanprodan

I would like project research and initiatives to start out here in the FaaS repo for visibility.

Thanks for the comments Lucas.

alexellis avatar Apr 13 '18 05:04 alexellis

FYI: https://github.com/dkozlov/openfaas-tensorflow-gpu

dkozlov avatar Apr 15 '18 21:04 dkozlov

That project looks like a useful example.

  • I can’t see a patch for the GPU. Does it “just work”?
  • could you run two different functions using GPU at the same time?
  • is it demonstrably faster on GPU vs. CPU?

alexellis avatar Apr 16 '18 12:04 alexellis

@dkozlov we had some discussion about this on Slack.. please can you summarize the points for the community?

alexellis avatar Apr 25 '18 17:04 alexellis

Sorry for the late response,

I can’t see a patch for the GPU. Does it “just work”?

Yes, it "just works" after installing nvidia-docker

could you run two different functions using GPU at the same time?

Yes, I can

is it demonstrably faster on GPU vs. CPU?

It depends on how you utilize your GPU, but in most cases neural networks on GPU are demonstrably faster than on CPU

dkozlov avatar May 09 '18 19:05 dkozlov

could you run two different functions using GPU at the same time? Yes, I can

I'm confused by this comment - I thought we were talking about scheduling constraints on Slack because two Pods cannot use the same GPU at the same time?

alexellis avatar May 09 '18 19:05 alexellis

I have found the following problems with the native Schedule GPUs support:

  • Containers (and pods) do not share GPUs. There’s no overcommitting of GPUs.
  • Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.

As a workaround I have implemented the following:

  • Install only nvidia-docker, do not install k8s-device-plugin
  • Add a label to the GPU nodes (sudo kubectl label nodes node1 node2 label=gpu) and to your OpenFaaS function:
    labels:
      label: gpu
    constraints:
      - "label=gpu"

dkozlov avatar May 09 '18 19:05 dkozlov

I'm confused by this comment - I thought we were talking about scheduling constraints on Slack because two Pods cannot use the same GPU at the same time?

If you install only the NVIDIA drivers, docker and nvidia-docker, that is enough to start GPU docker containers in Kubernetes without any device plugin.

Also, I have found two outdated guides for OpenShift which do not support overcommitting of GPUs: https://blog.openshift.com/use-gpus-openshift-kubernetes/ https://blog.openshift.com/use-gpus-with-device-plugin-in-openshift-3-9/

Some useful information from ClarifAI: https://clarifai.com/blog/scale-your-gpu-cloud-infrastructure-with-kubernetes

dkozlov avatar May 09 '18 19:05 dkozlov

My question was: "could you run two different functions using [the same] GPU at the same time?" (expecting an answer of no) and you answered "Yes, I can". Are we talking about the same thing? I thought GPUs could only be used by a single container/Pod at a time?

alexellis avatar May 10 '18 07:05 alexellis

I can repeat it again: "Yes, it is possible" :). It was even possible in 2016 - see the ClarifAI blog post: "allow multiple pods on the same machine to share the same card, even if you know what you’re doing (at least on paper: ask us about this one weird trick to do just that!)". Have you checked the https://github.com/dkozlov/openfaas-tensorflow-gpu manual?

dkozlov avatar May 10 '18 16:05 dkozlov

My question was: "could you run two different functions using [the same] GPU at the same time?" (expecting an answer of no) and you answered "Yes, I can". Are we talking about the same thing? I thought GPUs could only be used by a single container/Pod at a time?

If you try nvidia-docker, you can use a single GPU from more than one container at a time.

https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#can-i-share-a-gpu-between-multiple-containers

Can I share a GPU between multiple containers?

Yes. This is no different than sharing a GPU between multiple processes outside of containers.
Scheduling and compute preemption vary from one GPU architecture to another (e.g. CTA-level, instruction-level).

Kubernetes GPU support proposal:
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/gpu-support.md

@alexellis According to the issue https://github.com/kubernetes/kubernetes/issues/52757, from @flx42:

By default, kernels from different processes can't run on one GPU simultaneously (concurrency but not parallelism)

So @flx42 means that it is possible to share an NVIDIA device between multiple containers, but only in concurrency mode, as per the original NVIDIA design.

https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf

Three Compute Modes are supported via settings accessible in nvidia-smi:

  • PROHIBITED - the GPU is not available for compute applications.
  • EXCLUSIVE_PROCESS - the GPU is assigned to only one process at a time, and individual process threads may submit work to the GPU concurrently.
  • DEFAULT - multiple processes can use the GPU simultaneously. Individual threads of each process may submit work to the GPU simultaneously.

So by default, multiple processes can use the GPU simultaneously, even without using the "Multi-Process Service".

dkozlov avatar May 10 '18 16:05 dkozlov

You are both correct :)

@alexellis

My question was: "could you run two different functions using [the same] GPU at the same time?" (expecting an answer of no) and you answered "Yes, I can". Are we talking about the same thing? I thought GPUs could only be used by a single container/Pod at a time?

This is correct in the scope of Kubernetes: GPU resources are integer values and will belong to a single container. Unless you try to hack around it, that is :) In the Kubernetes issue linked above, I was trying to pitch the idea of sharing a GPU across all the containers in a single pod.

@dkozlov

So @flx42 means that it is possible to share NVIDIA device between multiple containers but only in concurrency mode by original NVIDIA design.

This is also correct. If you launch containers manually on your machine, you can launch 10 containers accessing the same GPU, no problem. You can also launch 10 processes outside containers; it's no different.

Let's not even talk about Multi Process Service (MPS) for now, you probably want to start with just the upstream GPU support in K8s. You can find more information in the Volta whitepaper, section VOLTA MULTI-PROCESS SERVICE http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf

flx42 avatar May 11 '18 17:05 flx42

This is correct in the scope of Kubernetes: GPU resources are integer values and will belong to a single container. Unless you try to hack around it, that is :) In the Kubernetes issue linked above, I was trying to pitch the idea of sharing a GPU across all the containers in a single pod.

@flx42 What other tricks/hacks could we use to overcommit GPUs (use a single GPU from multiple pods) within the scope of Kubernetes? I am asking because OpenFaaS scales by pods and cannot be scaled by adding containers to a single pod.

dkozlov avatar May 11 '18 18:05 dkozlov

I don't think you should try to hack around the official upstream support: that means don't overcommit GPUs.

If you need to run multiple pods for the same function, you will need multiple GPUs.

flx42 avatar May 11 '18 20:05 flx42

FYI: https://github.com/Microsoft/KubeGPU - it seems that Microsoft is trying to solve this problem

dkozlov avatar May 13 '18 18:05 dkozlov

@flx42 thanks for your input :+1: I would like to figure out what we need to do in the project to make it easy to consume a GPU in a function on GKE, or on a bare-metal node / VM with nvidia-docker swapped in. If you'd like to collaborate on this, we are also talking on Slack.

alexellis avatar May 13 '18 20:05 alexellis

I think you should embrace the current upstream support, including its limitations. If you assume that the cluster is already configured with the NVIDIA device driver, the device plugin and optionally taints/tolerations (see this article), then you can just schedule pods consuming resources of type nvidia.com/gpu.

For the sake of simplicity and to avoid falling into suboptimal scheduling corner cases, I think you should limit the initial implementation to 1 GPU per container. i.e. nvidia.com/gpu: 1
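
A rough sketch of the corresponding pieces of the Deployment spec that faas-netes could generate under those assumptions - the toleration is only needed if the GPU nodes are tainted, and the nvidia.com/gpu taint key shown here is an assumption rather than a fixed convention:

spec:
  tolerations:
    - key: "nvidia.com/gpu" # assumed taint key; depends on how the cluster was provisioned
      operator: "Exists"
      effect: "NoSchedule"
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1 # fixed at one GPU per container, no fractions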

flx42 avatar May 14 '18 17:05 flx42

Would either of you be interested in helping to implement that within the project?

alexellis avatar May 23 '18 21:05 alexellis

@flx42 GPUs in the cloud are very heavy-weight and expensive. What could I buy to use at home for testing this work and ensuring the GPU support is stable?

Do you or @dkozlov have a good container or some sample code that can verify that it has used or is using a GPU? That would be ideal for our testing and proving that things are working end to end.

alexellis avatar Jun 21 '18 09:06 alexellis

I'm working on a patch that will enable scheduling functions in k8s if there is an extended resource exposed by, let's say, a suitable device plugin, such as [this]. The work includes changes to faas-netes and faas-cli, and a minor one to the FunctionResources struct, which comes from faas. Naturally the faas-cli and faas patches will not be k8s specific.

feri avatar Jun 21 '18 10:06 feri

Do you or @dkozlov have a good container or some sample code that can verify that it has used or is using a GPU?

@alexellis https://hub.docker.com/r/tensorflow/tensorflow/

nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu

or sample code:

python3 -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
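
To turn that check into an end-to-end test on a GPU-enabled cluster, here is a minimal sketch of a Pod that requests one GPU and prints the devices TensorFlow can see, assuming the NVIDIA device plugin is installed:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-verify
spec:
  restartPolicy: OnFailure
  containers:
    - name: gpu-verify
      image: tensorflow/tensorflow:latest-gpu
      # a GPU device should show up in the printed list if the GPU is mounted correctly
      command: ["python", "-c", "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"]
      resources:
        limits:
          nvidia.com/gpu: 1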

What could I buy to use at home for testing this work and ensuring the GPU support is stable?

Open https://developer.nvidia.com/cuda-gpus -> CUDA-Enabled GeForce Products -> select any GPU with Compute Capability >= 6.1

dkozlov avatar Jun 21 '18 19:06 dkozlov

Derek add label: Hacktoberfest

alexellis avatar Oct 03 '18 08:10 alexellis

Will this work on hosts with >1 GPU? I have a computer with two GTX 1080 Tis that I use for training or bulk inference. NVIDIA allows you to peg a docker container to a single GPU via an environment variable: NVIDIA_VISIBLE_DEVICES=0 restricts that container to the first GPU, while NVIDIA_VISIBLE_DEVICES=1 goes to the GPU with index 1, etc.

sberryman avatar Oct 25 '18 21:10 sberryman

@sberryman yes, our device plugin implementation supports multiple GPUs on one node and sets this environment variable accordingly for the container.

flx42 avatar Oct 25 '18 21:10 flx42

1. GPU device plugin support proposal for k8s: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md
2. How to use NVIDIA GPUs (101): https://github.com/NVIDIA/k8s-device-plugin and https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.11/nvidia-device-plugin.yml

zouyee avatar Nov 28 '18 02:11 zouyee

Bump

alexellis avatar Jan 06 '19 15:01 alexellis

Bringing this to @dieterreuter attention, with the Jetson Nano as a target device for experimentation.

vielmetti avatar May 04 '19 19:05 vielmetti

@johnmccabe rebuilt his kernel to use the GPU in Docker.

alexellis avatar May 04 '19 19:05 alexellis