Consolidate with instructlab container images
We should consolidate our efforts with instructlab and share container base images:
https://github.com/instructlab/instructlab/tree/main/containers
2 issues with this at present:
- Pulls from non public locations: nvcr.io/nvidia/cuda:12.4.1-devel-ubi9
- Can't find where these images are published in general.
> Pulls from non public locations: nvcr.io/nvidia/cuda:12.4.1-devel-ubi9

This is not necessarily an issue if token-based auth can be given to the OCI registry? 🤔 wdyt
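For reference, a minimal sketch of what that could look like, assuming nvcr.io follows the documented NGC login convention (literal username `$oauthtoken`, an NGC API key as the password); the `login_nvcr` helper name is made up:

```python
import subprocess

def login_nvcr(api_key: str) -> None:
    """Log podman in to nvcr.io so the CUDA base image can be pulled."""
    subprocess.run(
        ["podman", "login", "nvcr.io",
         "--username", "$oauthtoken",  # literal string, per NGC convention
         "--password-stdin"],
        input=api_key.encode(),
        check=True,
    )
```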
If we can make it work I'm happy :smile:
We also have to think of ways of auto-detecting the primary GPU (that's kinda separate from this issue). I have an idea of how to do that for AMD GPUs, but for Nvidia I'm not sure... Then we'd automatically pull the relevant container image, set up podman with the correct holes punched, etc.
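As a rough sketch of the "correct holes" part, assuming the usual ROCm device paths and a CDI spec generated by nvidia-container-toolkit on the Nvidia side (the helper name is hypothetical):

```python
def podman_gpu_args(vendor: str) -> list[str]:
    """Map a detected GPU vendor to the podman flags that punch the holes."""
    if vendor == "amd":
        # ROCm containers need the KFD compute interface and the DRI render nodes.
        return ["--device", "/dev/kfd", "--device", "/dev/dri"]
    if vendor == "nvidia":
        # CDI-style device; assumes nvidia-container-toolkit generated a CDI spec.
        return ["--device", "nvidia.com/gpu=all"]
    return []
```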
Presence of "blacklist=nouveau" in /proc/cmdline is one idea; another is the presence of the /proc/driver/nvidia/gpus directory; another is "nvidia-smi" being on the PATH...
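Something like this, a quick sketch of those heuristics (function name made up):

```python
import os
import shutil

def has_nvidia_gpu() -> bool:
    """Cheap heuristics for NVIDIA GPU presence, roughly in order of reliability."""
    if os.path.isdir("/proc/driver/nvidia/gpus"):
        return True  # proprietary driver is loaded and sees at least one GPU
    if shutil.which("nvidia-smi"):
        return True  # driver userspace tools are installed
    try:
        with open("/proc/cmdline") as f:
            # nouveau is typically blacklisted when the proprietary driver is in use
            return "blacklist=nouveau" in f.read()
    except OSError:
        return False
```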
AMD has an easy-to-use, fast API (the check has to be quick, too) for reading the VRAM size of each AMD GPU present in a system, which is quite nice; the GPU with the most VRAM can be selected.
Nothing may be absolutely perfect here either, so we'll also have to introduce a command-line flag to manually select a GPU (and sometimes one may not want to use the primary GPU, etc.).
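A minimal sketch of that selection logic, assuming the fast API meant above is the amdgpu sysfs interface (`mem_info_vram_total`); `select_gpu` and the `--gpu` override are hypothetical names:

```python
import glob
import os

def best_amd_gpu() -> str | None:
    """Return the sysfs device dir of the amdgpu card with the most VRAM."""
    best, best_vram = None, -1
    # One small file read per GPU, so the check stays fast.
    for path in glob.glob("/sys/class/drm/card*/device/mem_info_vram_total"):
        try:
            with open(path) as f:
                vram = int(f.read())
        except (OSError, ValueError):
            continue
        if vram > best_vram:
            best, best_vram = os.path.dirname(path), vram
    return best

def select_gpu(override: str | None = None) -> str | None:
    """Honor an explicit --gpu style override before auto-detection."""
    return override or best_amd_gpu()
```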
@tarilabs I'm also unsure if the instructlab team plans on maintaining/publishing those in the future, so maybe we should create our own...
We need to create open versions of those images and store them in the quay.io/ramalama repository, or, if they want to maintain them, I would be fine with using a different repo. One issue might be content that is not allowed to be shipped as a container image, only pulled from an upstream vendor.
Here's another image I was pointed towards that will be useful:
https://github.com/rh-aiservices-bu/llm-on-openshift/blob/main/llm-servers/vllm/gpu/Containerfile
This will be a useful reference for our image with the vllm runtime. It's UBI9-based, which is exactly what we want.