Bitnami kubectl image change – migration plan for Velero helm chart
**Describe the problem/challenge you have**
The Velero Helm chart pulls the `bitnami/kubectl` image (https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/values.yaml#L316-318). Bitnami has announced that its public container catalog will no longer be maintained (https://github.com/bitnami/charts/issues/35164).

**Describe the solution you'd like**
Is there a migration plan to an actively maintained kubectl image?

**Anything else you would like to add:**
This deprecation might turn out to be a non-issue, but I wanted to flag it early in case action is needed.
Commented this on the linked PR, but to note here:
Because the k8s registry kubectl image has no shell (and the same is true of most other kubectl images), this work might also require fixing https://github.com/vmware-tanzu/helm-charts/issues/571, unless there's a viable alternative kubectl image that ships `which`, `sh`, and `kubectl`. The last time I looked at that issue, it seemed we might need Velero CLI changes, since writing the CRDs to an output file (in a shared volume) was not possible without a shell.
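For context, the shell dependency boils down to a pattern like the sketch below. This is illustrative only, not the chart's actual Job manifest; in the real chart the `velero` binary has to be made available to the container too (e.g. via a shared volume), which is exactly where issue 571 comes in.

```yaml
# Illustrative only: piping generated CRDs into kubectl needs /bin/sh,
# which distroless kubectl images (e.g. registry.k8s.io/kubectl) don't ship.
containers:
  - name: upgrade-crds
    image: docker.io/bitnami/kubectl:1.33   # a kubectl image that still has a shell
    command: ["/bin/sh", "-c"]
    args:
      - velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```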
As a workaround, I tried changing the repository for the kubectl image to `bitnamisecure/kubectl` via `kubectl.image.repository`.
For the upgrade-crds Job it seems to have worked.
> bitnamisecure

Don't `bitnamisecure` images require a subscription? This article says so: https://news.broadcom.com/app-dev/broadcom-introduces-bitnami-secure-images-for-production-ready-containerized-applications
@rajivml yes they require subscription, that's the whole reason for this shift 💵
> unless there's a viable alternative kubectl image with `which`, `sh`, and `kubectl`
The image eclipsefdn/kubectl should do.
@nemobis

> The image eclipsefdn/kubectl should do.

This and most of the kubectl images end with a similar error:

```
exec /tmp/sh: no such file or directory
```
How about https://github.com/alpine-docker/k8s
@kaovilai alpine images are based on busybox. They don't have a standalone `sh` either, just a symlink that can't be copied over.
From my reading, `bitnamisecure` is free on the `latest` tag, so why can't we use it?
Do we require a non-`latest` tag? Or does anyone expect to open a support ticket for a kubectl image?

> For production workloads and long-term support, users are encouraged to adopt Bitnami Secure Images

I don't think we'd need long-term support, as the latest kubectl is generally stable towards older cluster versions, IMO.
cgr.dev/chainguard/kubectl:latest-dev might work. I haven't tested it.
I generally agree that latest would probably be fine. The chart currently selects the kubectl image dynamically based on your cluster version, but in general we shouldn't hit issues, since it's only used for a very basic kubectl apply. Using latest isn't best practice, of course, so we might also want to consider pinning a SHA to prevent any drift/issues arising.
That might be the best short-term solution without CLI changes (https://github.com/vmware-tanzu/velero/pull/9132). Chainguard's latest-dev image is pretty good and I previously used it with the Velero chart, so it should work; or bitnamisecure/kubectl:latest (latest is also free with bitnamisecure).
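If anyone wants to try this, the override would look roughly like the values snippet below (using the `kubectl.image.*` keys mentioned earlier in this thread; the exact tag choice is an assumption to validate):

```yaml
kubectl:
  image:
    repository: cgr.dev/chainguard/kubectl
    tag: latest-dev
```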
I just tried both of the latest Bitnami images (bitnami/kubectl and bitnamisecure/kubectl), and interestingly the bitnamisecure image did not work with the CRD process. Has anyone else tried it, and what was the result?
For me, both `bitnamisecure/kubectl:latest` and `cgr.dev/chainguard/kubectl:latest-dev` don't work with the CRD process either.
For now, I recommend switching to bitnamilegacy/kubectl in combination with a tagged version.
See: https://hub.docker.com/r/bitnamilegacy/kubectl/tags
I tested this and it works fine with the CRD upgrade Job. The images are identical to bitnami/kubectl.
But this comes with two caveats:

- `bitnamilegacy/kubectl` will not receive any updates or new versions, so this is just an interim solution.
- It is unclear how long Bitnami will keep providing `bitnamilegacy`. They have hinted that it will be shut down sooner or later.
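The interim pin described above translates to a values override along these lines (the tag is an example; pick a current one from the bitnamilegacy tags page):

```yaml
kubectl:
  image:
    repository: bitnamilegacy/kubectl
    tag: "1.33.4"
```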
In any case, we need a substitute image soon. I propose registry.k8s.io/kubectl. That should be as official as it can get and should get rid of third party unreliability.
I have made this pull request https://github.com/vmware-tanzu/helm-charts/pull/706 to remove the need for the Bitnami kubectl image and the shell.
The idea is to first generate the CRDs YAML and then apply it using kubectl, so you don't need a shell :)
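Conceptually, the two-step approach looks something like the sketch below. Names, images, and paths are assumptions for illustration, not the PR's actual manifest, and note the earlier caveat that writing the CRDs to a file may need Velero CLI support:

```yaml
# Step 1: a Velero-image init container generates the CRDs into a shared volume.
# Step 2: a distroless kubectl container applies them; no shell is involved.
initContainers:
  - name: generate-crds
    image: velero/velero   # illustrative
    command: ["/velero", "install", "--crds-only", "--dry-run", "-o", "yaml"]
    # assumes the CLI can write its output to a file, e.g. /crds/crds.yaml
    volumeMounts:
      - name: crds
        mountPath: /crds
containers:
  - name: apply-crds
    image: registry.k8s.io/kubectl:v1.34.1
    command: ["kubectl", "apply", "-f", "/crds/crds.yaml"]
    volumeMounts:
      - name: crds
        mountPath: /crds
volumes:
  - name: crds
    emptyDir: {}
```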
I like that @albundy83
After a comment from @mjnagel and seeing his marvelous pull request here, I have updated mine to use his new work in progress.
We won't need the kubectl binary and piping stuff to apply CRDs anymore; Velero will do the job directly using `velero install --crds-only --apply`.
We still need the kubectl binary for the CRD cleanup Job here, but the kubectl command can be called directly without a shell (though who would want to remove Velero? :-) ).
We also need it for the label-namespace Job here, and as with the cleanup Job, we can call the kubectl command directly.
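Calling kubectl directly (exec form, no shell) in those Jobs would look roughly like this; the label and namespace here are illustrative, not the chart's actual values:

```yaml
containers:
  - name: label-namespace
    image: registry.k8s.io/kubectl:v1.34.1
    # exec form: no /bin/sh required, so it works in distroless images
    command: ["kubectl", "label", "namespace", "velero", "app.kubernetes.io/managed-by=Helm", "--overwrite"]
```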
Here is a short-term fix: https://github.com/vmware-tanzu/helm-charts/pull/707
@albundy83, I believe this will break if deployed on a Kubernetes 1.34 cluster.
Yes : https://hub.docker.com/r/bitnamilegacy/kubectl/tags?name=1.34
But if we don't do anything, it will break for 1.34 and for all other releases after September 29th (see here).
I suggest locking the image tag in the chart to 1.33.4, as it's the latest one available, until we have a proper fix.
Maybe it's better to not do that :) People who have Kubernetes 1.34 will just override it with the release you mentioned.
> In any case, we need a substitute image soon. I propose registry.k8s.io/kubectl. That should be as official as it can get and should get rid of third-party unreliability.
I agree that registry.k8s.io/kubectl seems like the best solution going forward
After switching the Velero kubectl image to registry.k8s.io/kubectl, I ran into an issue: the chart auto-detects the Kubernetes cluster version and sets the image tag without the v prefix (e.g. kubectl:1.34.1), but that registry requires the prefix (e.g. kubectl:v1.34.1).
It’s a small detail to keep in mind if we move to registry.k8s.io/kubectl, which in my opinion is a good solution.
For now, I’ve hardcoded my Kubernetes version:
```yaml
kubectl:
  image:
    repository: registry.k8s.io/kubectl
    tag: v1.34.1
```
@mydoomfr you can update the `_helpers.tpl` like I have done here, and you won't have to hard-code anything.
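For illustration, a `_helpers.tpl` tweak along these lines could derive a v-prefixed tag from the cluster version. The helper name is hypothetical, and it relies on the fact that Helm's `.Capabilities.KubeVersion.Version` already includes the `v` prefix:

```yaml
{{/* Hypothetical helper: use an explicit tag if set, else the v-prefixed cluster version */}}
{{- define "velero.kubectl.imageTag" -}}
{{- if .Values.kubectl.image.tag -}}
{{- .Values.kubectl.image.tag -}}
{{- else -}}
{{- .Capabilities.KubeVersion.Version -}}
{{- end -}}
{{- end -}}
```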
Actually, I spoke a bit too soon. The velero-upgrade-crds job is failing:
```
Normal   Pulled   2s (x3 over 16s)  kubelet  Container image "registry.k8s.io/kubectl:v1.34.1" already present on machine
Normal   Created  2s (x3 over 15s)  kubelet  Created container: kubectl
Warning  Failed   2s (x3 over 15s)  kubelet  Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: exec: "/bin/sh": stat /bin/sh: no such file or directory
Warning  BackOff  2s (x2 over 14s)  kubelet  Back-off restarting failed container kubectl in pod velero-upgrade-crds-946ql_velero(38ed13e7-349d-4262-a861-69cc6974ccc6)
```
Since registry.k8s.io/kubectl:v1.34.1 is distroless...
Sorry for the noise; I missed the earlier comments about the lack of a shell.
How important is the upgrade-crds Job? I get the purpose, but how important is it until there is a proper fix?
I just disabled it entirely for now (upgradeCRDs: false) due to sync issues.
Almost every single kubectl image doesn't have a shell, and the one that does is musl-based 😢
Hello folks
Today our upgrade Job blew up with:

```
Warning  Failed  3m28s (x5 over 6m13s)  kubelet  Failed to pull image "docker.io/bitnamilegacy/kubectl:1.34": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/bitnamilegacy/kubectl:1.34": failed to resolve reference "docker.io/bitnamilegacy/kubectl:1.34": docker.io/bitnamilegacy/kubectl:1.34: not found
```

What is the current accepted workaround for this?