[Bug]: glibc error: CPU does not support x86-64-v2
What happened?
Deploying the Helm chart, I got the following error:
Fatal glibc error: CPU does not support x86-64-v2
Can you tell me why litellm requires x86-64-v2 instructions?
Relevant log output
Fatal glibc error: CPU does not support x86-64-v2
stream closed EOF for litellm/litellm-deployment-7f46fd5bdb-wd2kv (litellm-container)
Are you a ML Ops Team?
Yes
What LiteLLM version are you on?
1.65.0-stable
Twitter / LinkedIn details
No response
any update on this?
Same error here.
same here
Found the issue: if you are using Proxmox, just make sure you select host as the CPU type under the Hardware / Processor menu.
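For reference, the same change can be made from the Proxmox host shell with qm; this is just a sketch, the VM ID 100 is a placeholder, and the VM needs a full stop/start afterwards for the new CPU model to take effect:

# Check the current CPU type of the VM (100 is a placeholder VM ID)
qm config 100 | grep ^cpu
# Pass the host CPU model (and all its instruction sets, including x86-64-v2) through to the guest
qm set 100 --cpu host
# Stop and start the VM so the new CPU model is applied
qm stop 100 && qm start 100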
I've got the same issue here.
You can use this command to build: DOCKER_DEFAULT_PLATFORM=linux/amd64 docker compose build
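If you prefer not to set the environment variable every time, the same platform pin can be declared per service in the compose file. A minimal sketch, where the service name and image tag are placeholders for your own setup:

services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest   # placeholder image tag
    platform: linux/amd64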
same here
Hi everyone,
I had the same problem; after some digging I gathered more information and found a solution. (Current LiteLLM version: v1.73.1-nightly)
Root cause of the problem, hopefully useful for the LiteLLM team (ignore if you just want the solution):
- The base image used in the Dockerfile comes from Chainguard, which provides secure and up-to-date Python images for Docker.
- The requirement for x86-64-v2 support comes from the Chainguard image, not from LiteLLM itself. If you clone the LiteLLM GitHub project and build it yourself, you will hit the same issue because of that base image; but if you use another image such as python:3.12-slim, the build does not have this problem. So the CPU compatibility problem comes from the Chainguard Docker image, not from LiteLLM. (A quick way to check whether a CPU supports x86-64-v2 is shown after this list.)
- Building from source with Docker using another base image will still give you trouble, as the project's Dockerfile is heavily tailored to the Chainguard image; for me it broke at line 25 due to files that don't exist in other base images.
- I tried using the nightly and rc versions but had the same problem.
- The latest image on Docker Hub worked on my CPU, but it is out of date and still has the resource error when loading the UI.
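As an aside, if you want to check whether a machine actually supports x86-64-v2 before pulling an image, recent glibc can report the supported micro-architecture levels directly. A quick check, assuming glibc 2.33 or newer on an x86-64 host:

# Ask the dynamic loader which x86-64 levels this CPU supports (glibc >= 2.33)
/lib64/ld-linux-x86-64.so.2 --help | grep "x86-64-v"
# Levels marked "(supported, searched)" are available on this CPU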
The solution:
LiteLLM can be installed with pip, so instead of building from source or running their docker compose, just create a Docker image that installs litellm with pip; it works beautifully. This is the Dockerfile I built and ran on my Ubuntu server:
FROM --platform=linux/amd64 python:3.12-slim

# Install system dependencies (gcc is needed to build some wheels)
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Install LiteLLM with the proxy extras and the Prisma client
RUN pip install --no-cache-dir "litellm[proxy]" prisma

# Set the entrypoint (optional: specify host and port like the normal 'litellm' command)
ENTRYPOINT ["litellm"]
Note that this Dockerfile is not optimized for security; it's just a version that solves the CPU problem. Improve it from here to follow Dockerfile best practices for production :)
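In case it helps, this is roughly how the image can be built and run; the image name, port, and config path below are placeholders, and the example assumes you start the proxy with a config.yaml:

# Build the image (the tag is a placeholder)
docker build -t litellm-oldcpu .
# Run the proxy on port 4000, mounting a LiteLLM config file (paths are placeholders)
docker run -p 4000:4000 -v $(pwd)/config.yaml:/app/config.yaml litellm-oldcpu --config /app/config.yaml --port 4000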
@guilherme-deschamps THANK YOU!! I adapted your answer to generate the prisma binaries in my repo StNiosem/litellm-oldcpu, so I could use a database for Virtual Keys, and it works!
This is still a problem...
The Proxmox fix (selecting host as the CPU type) worked for me. Thanks a lot!
Thanks, the pip-based Dockerfile above worked for me too.