
Flag to Silence CUDA Version Warnings

Open HoustonBoston opened this issue 2 months ago • 3 comments

Feature request description

I would like to silence any warnings related to the CUDA version for GPUs that don't meet the minimum required CUDA version.

cloudcomp@unibot-workbench:~$ ramalama run gemma3
2025-11-13 19:32:37 - WARNING - CUDA version (11, 4) is not supported. Minimum required version is 12.4.: Falling back to default image.
WARNING:ramalama:CUDA version (11, 4) is not supported. Minimum required version is 12.4.: Falling back to default image.
2025-11-13 19:32:37 - WARNING - CUDA version (11, 4) is not supported. Minimum required version is 12.4.: Falling back to default image.
WARNING:ramalama:CUDA version (11, 4) is not supported. Minimum required version is 12.4.: Falling back to default image.
🦭 > what is umass lowlel
UMass Lowell is a fascinating and unique institution! Here's a breakdown of what makes it special, particularly focusing on the "Lowell" part and the "UMass" connection:

**1. What is UMass Lowell?**

* **Part of the University of Massachusetts System:** UMass Lowell is one of the five campuses that make up the University of Massachusetts system. The others are UMass Amherst, UMass Boston, UMass Dartmouth, and UMass Chan Medical School.
* **Historically a Technical College:**  Originally founded as Lowell Textile Technological Institute in 1918, UMass Lowell has a deeply rooted history in manufacturing, textiles, and engineering. This legacy is *still* a huge part of its identity.

Suggest potential solution

Perhaps a -s or -S flag to silence warnings?
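For illustration, a rough sketch of what that could look like; the flag name and the wiring through Python's logging module are assumptions here, not ramalama's actual CLI code:

import argparse
import logging

parser = argparse.ArgumentParser(prog="ramalama")
# Hypothetical flag; ramalama does not currently define it.
parser.add_argument("-s", "--silence-warnings", action="store_true",
                    help="suppress warnings such as the CUDA version check")
args, _ = parser.parse_known_args()

if args.silence_warnings:
    # Raise the logger threshold so WARNING records (like the CUDA
    # fallback message) are dropped and only errors reach the console.
    logging.getLogger("ramalama").setLevel(logging.ERROR)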

Have you considered any alternatives?

No alternatives exist.

Additional context

No response

HoustonBoston avatar Nov 13 '25 21:11 HoustonBoston

A better approach may be to add a config option to skip the CUDA version check, similar to the existing no_missing_gpu_prompt option: https://github.com/containers/ramalama/blob/508625bcc158b71d1eb40b5bfc11786c4eb86851/ramalama/config.py#L116
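Roughly along these lines, assuming the defaults live in a dataclass next to no_missing_gpu_prompt (the class name and the new field name are only illustrative, not the actual ramalama code):

from dataclasses import dataclass

@dataclass
class BaseConfig:
    # Existing option that suppresses the "no GPU detected" prompt.
    no_missing_gpu_prompt: bool = False
    # Hypothetical new option: when True, skip the CUDA minimum-version
    # check and silently fall back to (or keep) the chosen image.
    ignore_cuda_version_check: bool = False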

olliewalsh avatar Nov 14 '25 12:11 olliewalsh

Does the GPU actually work, or is llama.cpp failing over to CPU only?

rhatdan avatar Nov 14 '25 14:11 rhatdan

You can also change the image setting in ramalama.conf to always use the quay.io/ramalama/ramalama image.
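That is, something along these lines in ramalama.conf (see ramalama.conf(5) for the exact section and key names):

[ramalama]
# Always run the default image instead of a CUDA-specific one,
# so the CUDA version check and its warning no longer apply.
image = "quay.io/ramalama/ramalama"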

rhatdan avatar Nov 17 '25 16:11 rhatdan

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] avatar Dec 18 '25 00:12 github-actions[bot]