[research] Test GPU support in workspaces
Is your feature request related to a problem? Please describe
Kubernetes can make GPUs available in pods. Can those GPUs be used from within a Gitpod workspace?
Describe the behaviour you'd like
It should be possible to use GPUs from within a workspace if the underlying cluster supports them.
How
Can we run Gitpod workspaces with a GPU? Try changing the ConfigMaps in-place for ws-manager and workspace-templates first, before any actual code changes. The expectation is that if code changes are needed, they'll be thrown away and not actually merged back to main.
Intended output
Note: this is just a research task. We want to know where we stand today for scheduling workloads that need GPU, and how well Gitpod runs on them.
Questions
- Does Gitpod work on a node supporting GPUs? How well does it work?
- Can we use all of the cores of the GPU?
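The first question above can be checked empirically from inside a running workspace. A minimal sketch of such a check, assuming NVIDIA hardware (the device paths and CLI name are NVIDIA-specific; adjust for other vendors):

```python
import os
import shutil

def gpu_visible() -> bool:
    """Best-effort check, from inside a workspace container, that the
    node's GPU is actually exposed to it: the NVIDIA device nodes must
    be present and the driver CLI must be on the PATH."""
    has_device_node = any(os.path.exists(f"/dev/nvidia{i}") for i in range(8))
    has_driver_cli = shutil.which("nvidia-smi") is not None
    return has_device_node and has_driver_cli

if __name__ == "__main__":
    print("GPU visible:", gpu_visible())
```

If this returns `True`, `nvidia-smi -L` should then list the devices, which also answers how many GPUs (and which) the workspace can see.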
@lucasvaltl I've scheduled this work, as it'll require a code change for Workspace components.
@lucasvaltl @corneliusludmann I've removed Team Workspace from this issue, and added to the self-hosted inbox.
Assigned @metcalfc for now, as he agreed to set up a cluster for testing, if I am informed correctly. Please assign me to this issue when it's done. :pray:
@corneliusludmann I put a branch in the EKS guide that sets up a GPU node group. There are a couple of challenges: we don't have a custom AMI with all the NVIDIA support, so I had to use the default (AL2) and stick to FUSE, because shiftfs doesn't seem to work. I also had to remove our bootstrap script, so the nodes won't have the labels on them. But it should get things started with GPUs.
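For anyone reproducing this, a GPU node group along these lines can be sketched in an eksctl config file. This is a hypothetical sketch, not the actual branch mentioned above; the cluster name, region, and instance type are assumptions:

```yaml
# Hypothetical eksctl config adding a GPU node group for testing.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: gitpod-gpu-test
  region: us-west-2
managedNodeGroups:
  - name: workspace-gpu
    instanceType: g4dn.xlarge   # NVIDIA T4; pick any GPU instance type
    desiredCapacity: 1
    # eksctl selects a GPU-enabled AL2 AMI for GPU instance types,
    # matching the "default AMI" constraint described above.
```

The NVIDIA device plugin DaemonSet still has to be deployed on the cluster so the `nvidia.com/gpu` resource becomes schedulable.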
It would be quite awesome to have GPU support. I'm looking for a convenient way to spin up a workspace that would allow me to play with cupy, which requires CUDA and thus an NVIDIA GPU. While one can either buy a dedicated machine or configure a dedicated VM for that, it is an expensive and time-consuming investment if you are not planning to use it systematically.
I would like to try this in a self-hosted environment, but I am unable to find any documentation on `workspace-templates`. How should I go about adding the `nvidia.com/gpu` entry to the `resources.requests` list of the workspace pod?
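For reference, the shape of such an entry in a pod template would be along these lines. This is a hedged sketch: the surrounding ConfigMap key and template schema used by ws-manager are assumptions, only the resource stanza itself follows the standard Kubernetes extended-resource convention:

```yaml
# Hypothetical workspace template fragment requesting one GPU.
default:
  spec:
    containers:
      - name: workspace
        resources:
          limits:
            nvidia.com/gpu: "1"   # extended resources: requests, if set,
          requests:               # must equal limits
            nvidia.com/gpu: "1"
```

Note that for extended resources like `nvidia.com/gpu`, Kubernetes requires the request to equal the limit, so specifying only `limits` is also sufficient.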
Any thoughts on this?
:wave: @sigurdkb , we plan to make it available in the SaaS, but, as you can see, there are still some open questions and we don't have an estimate yet (as to which quarter we can release it).
Would you be interested in using GPU via our SaaS offering? If yes, please reach out to Andre via the calendly link he shared in this issue.
If you're still interested in using it for self-hosted, please let @atduarte know. I'm sure he'd be interested in creating a separate issue (similar to the SaaS one I shared above).
@KelSolaar Would you be interested in using GPU via our SaaS offering? If yes, please reach out to Andre via the calendly link he shared in https://github.com/gitpod-io/gitpod/issues/10650.
SaaS is not a viable option for us. @atduarte, I'm still very interested in getting this to work for self-hosted :)
@sigurdkb we are too :) SaaS comes first as it is the best way we have to learn and experiment ourselves, and then help others bring the same experience to Gitpod Self-Hosted.
It would be very valuable to me to better understand your needs. If you are willing, here's my calendly link: https://calendly.com/andre-gitpod/15-minute-product-feedback
We would also be very much interested in such a feature for self-hosted: not GPU support in particular, but device plugin support in general. We use the smarter device manager as a very simple way to allow access to `/dev/kvm` within our cluster. With this, all we need to do is add the `smarter/kvm` resource to the pod.
For the GitLab runner we built a custom MutatingAdmissionController to inject the resource based on annotations the runner sets. We could go this path for Gitpod as well (I'm actually already creating a PoC), but due to the lack of the ability to set custom annotations or labels for certain workspace pods, it will be an all-or-nothing solution. Therefore, some integration for custom resources (or at least custom annotations or labels) would be appreciated.
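Concretely, what such a webhook would inject into the workspace pod spec is just a resource stanza like the following. This is a sketch based on the resource name mentioned above; the container name is an assumption:

```yaml
# Fragment a MutatingAdmissionController would patch into a
# workspace pod to expose /dev/kvm via smarter-device-manager.
spec:
  containers:
    - name: workspace
      resources:
        limits:
          smarter/kvm: "1"   # extended resource served by the device plugin
```

Since device plugin resources all follow this one pattern, a generic "extra resources per workspace" setting in the template would cover GPU, KVM, and similar use cases at once.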
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Any updates on GPU support? Given the models I'd like to run, 8-16 GB of RAM is a starting point for inference. Thanks!
I'm not entirely sure how this could be integrated into workspace images. However, it appears that the Nvidia Tesla V100 can be pre-built into a Docker image (dead link). Has anyone tried this approach?
Any news?