Velero CRD upgrade job failures
What steps did you take and what happened: When upgrading the Helm chart for Velero, a job is created to upgrade the Velero CRDs as well. On every deployment, the CRD job fails with:

```
/tmp/sh: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /tmp/sh)
/tmp/sh: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /tmp/sh)
```
This happens when the job starts on an amd64-based node, even though it appears to be looking for aarch64 (ARM) libraries. It affects all of our Helm upgrades for Velero across all of our clusters (each with a somewhat different configuration).
What did you expect to happen:
The Velero CRD upgrade job finishes properly and the Helm upgrade completes.
The following information will help us better understand what's going on:
Using helm chart version: 3.1.0
Environment:
- Velero version (use `velero version`): 1.10.0
- Velero features (use `velero client config get features`):
- Kubernetes version (use `kubectl version`): "1.26"
- Kubernetes installer & version: Helm chart version 3.1.0
- Cloud provider or hardware configuration: AWS EKS
- OS (e.g. from `/etc/os-release`): Amazon Linux 2
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "I would like to see this bug fixed as soon as possible"
- :-1: for "There are more important bugs to focus on right now"
Which Velero version do you want to upgrade to?
What does your configuration look like for these lines? Have you reconfigured the initContainers images for the upgrade job?
We are not actually upgrading the version of Velero, just changing some values to be applied. The change in Helm values triggers the CRD upgrade job.
For the initContainer we are using:

```yaml
initContainers:
  - name: velero-plugin-for-aws
    image: (companyInternalArtifactory)/velero/velero-plugin-for-aws:v1.5.2
    imagePullPolicy: IfNotPresent
```
and our kubectl setup is similar, with:

```yaml
kubectl:
  image:
    repository: (companyInternalArtifactory)/bitnami/kubectl
    tag: "${kubectl_version}"
```
@jkoop144
I've tried it with `bitnami/kubectl:latest` and it works. Maybe the `/tmp/sh` binary in your image is not statically compiled.
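One way to check that guess is to inspect the binary's linkage (a hedged sketch; `/bin/ls` is only a stand-in path here, and availability of `ldd` is assumed — in the failing pod you would point this at the actual `/tmp/sh`):

```shell
# Sketch: tell a statically linked binary from a dynamically linked one.
# A static binary runs regardless of the glibc version in the target image;
# a dynamic one needs the exact GLIBC_* symbol versions that ldd lists.
# /bin/ls is a stand-in; substitute /tmp/sh inside the job's pod.
linkage=$(ldd /bin/ls 2>&1 || true)
echo "$linkage"
# A static binary prints something like "not a dynamic executable";
# a dynamic one lists libc.so.6 and the symbol versions it requires.
```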
Hello,
Same issue for me.
My build should be reproducible: I pinned the chart version to 4.0.1 and re-ran the pipeline after some time (with no changes to values). The upgrade-CRDs job fails the same way:
```
$ kubectl logs velero-upgrade-crds-km5kq -c velero
/tmp/sh: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /tmp/sh)
/tmp/sh: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /tmp/sh)
```
The job is templated with the correct image and SHA:

```yaml
containers:
  - args:
      - -c
      - /velero install --crds-only --dry-run -o yaml | /tmp/kubectl apply -f -
    command:
      - /tmp/sh
    image: velero/velero:v1.11.1@sha256:fabf2d40f640019aed794f477013ec03f2a4b91e3f5aa80f9becdd8d040c5c6b
```
Docker Hub says the image was pushed 8 months ago, but could it have changed during that time? I didn't pin the image to a SHA.
I tried to run the job with tag 1.12.4 and that works. So the issue is in both 1.11.0 and 1.11.1.
@Lirt thanks for the additional details. Yes, I last tried 1.13.0 and it's OK, but I hit the above problem with v1.9.0, v1.10.0, v1.11.0, and v1.12.0.
Is there a way to target 1.13.0 for the upgrade-CRDs job in the Helm chart values so that I can bypass the versions containing the issue?
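For reference, a hedged values sketch for the vmware-tanzu/velero chart (assuming the chart's `upgradeCRDs` and `image` value names; verify against your chart version — to my knowledge there is no dedicated image override for the upgrade-CRDs job alone, since it reuses the main `image`):

```yaml
# Option A: bump the main image (note: this also changes the server deployment).
image:
  repository: velero/velero
  tag: v1.13.0

# Option B: skip the job entirely and apply the CRDs out of band,
# mirroring what the job's command does:
#   velero install --crds-only --dry-run -o yaml | kubectl apply -f -
upgradeCRDs: false
```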
@qiuming-best we have released a Helm chart with Velero v1.13.0. Is this issue still present?
Upgrading to version 6.1.0 of the chart fixed this issue for me.
Thanks for the info; closing.