Consolidate with instructlab container images

ericcurtin opened this issue 1 year ago • 7 comments

We should consolidate our efforts with instructlab and share container base images:

https://github.com/instructlab/instructlab/tree/main/containers

ericcurtin · Aug 13 '24

Two issues with this at present:

  1. Pulls from a non-public location: nvcr.io/nvidia/cuda:12.4.1-devel-ubi9
  2. Can't find where these images are published in general.

ericcurtin · Aug 28 '24

  1. Pulls from a non-public location: nvcr.io/nvidia/cuda:12.4.1-devel-ubi9

This is not necessarily an issue if token-based auth can be given to the OCI registry? 🤔 wdyt
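For reference, a rough sketch of what that could look like (assuming the base image stays behind NGC auth; nvcr.io uses the literal username $oauthtoken with an NGC API key as the password):

```
podman login nvcr.io --username '$oauthtoken' --password <NGC API key>
```

After that, builds that pull nvcr.io/nvidia/cuda:12.4.1-devel-ubi9 as a base should work.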

tarilabs · Aug 28 '24

  1. Pulls from a non-public location: nvcr.io/nvidia/cuda:12.4.1-devel-ubi9

This is not necessarily an issue if token-based auth can be given to the OCI registry? 🤔 wdyt

If we can make it work I'm happy :smile:

ericcurtin · Aug 28 '24

We also have to think of ways of auto-detecting the primary GPU (that's somewhat separate from this issue). I have an idea of how to do that for AMD GPUs, but for Nvidia I'm not sure... Then we can automatically pull the relevant container image, set up podman with the correct device access, etc.

The presence of "blacklist=nouveau" in /proc/cmdline is one idea; another is the presence of the /proc/driver/nvidia/gpus directory; another is the presence of "nvidia-smi"...
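As a rough sketch of those heuristics (the function name and structure are illustrative, not existing ramalama code):

```python
import os
import shutil

def has_nvidia_gpu() -> bool:
    # 1. nouveau blacklisted on the kernel command line
    try:
        with open("/proc/cmdline") as f:
            if "blacklist=nouveau" in f.read():
                return True
    except OSError:
        pass
    # 2. the proprietary driver exposes a per-GPU directory
    if os.path.isdir("/proc/driver/nvidia/gpus"):
        return True
    # 3. nvidia-smi is installed and on PATH
    return shutil.which("nvidia-smi") is not None
```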

AMD has a nice, easy-to-use, fast API (the check has to be quick as well) for checking the VRAM size of each AMD GPU present in a system, so the GPU with the most VRAM can be selected.
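Assuming the API meant here is the amdgpu sysfs interface (the mem_info_vram_total files), picking the GPU with the most VRAM could look roughly like this (illustrative sketch, not existing ramalama code):

```python
import glob
import os

def pick_primary_amd_gpu() -> str | None:
    best_card, best_vram = None, -1
    for path in glob.glob("/sys/class/drm/card*/device/mem_info_vram_total"):
        try:
            with open(path) as f:
                vram = int(f.read().strip())  # VRAM size in bytes
        except (OSError, ValueError):
            continue
        if vram > best_vram:
            best_card = os.path.basename(os.path.dirname(os.path.dirname(path)))
            best_vram = vram
    return best_card  # e.g. "card1", or None if no amdgpu device is present
```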

It may be the case that nothing is absolutely perfect here, so we will also have to introduce a command-line flag to manually select the GPU (sometimes one may not want to use the primary GPU, etc.).

ericcurtin · Aug 28 '24

@tarilabs I'm also unsure if the instructlab team plans on maintaining/publishing those in the future, so maybe we should create our own...

ericcurtin · Aug 28 '24

We need to create open versions of those images and store them in the quay.io/ramalama repository, or, if they want to maintain them, I would be fine with using a different repo. An issue might be content that is not allowed to be shipped as a container image, but only pulled from an upstream vendor.

rhatdan · Sep 03 '24

Here's another image I was pointed towards that will be useful:

https://github.com/rh-aiservices-bu/llm-on-openshift/blob/main/llm-servers/vllm/gpu/Containerfile

This will be a useful reference for our image with the vllm runtime. It's UBI9-based, which is exactly what we want.

ericcurtin · Sep 03 '24