Adding support for s390x architecture
What do you want to happen?
Currently Kubebuilder releases binaries for several platforms, including `arm64` and `ppc64le`. I am exploring the possibility of making a binary available for the `s390x` big-endian architecture.

I am able to successfully build and run `test-unit` on the `s390x` architecture with the following changes to the `Makefile`:
```diff
index 2d69e6f4..23575c1a 100644
--- a/Makefile
+++ b/Makefile
@@ -102,7 +102,11 @@ test: test-unit test-integration test-testdata test-book ## Run the unit and int
 .PHONY: test-unit
 test-unit: ## Run the unit tests
+ifeq (s390x,$(shell uname -p))
+	go test -v ./pkg/...
+else
 	go test -race -v ./pkg/...
+endif
 
 .PHONY: test-coverage
 test-coverage: ## Run unit tests creating the output to report coverage
```
However, `make test` fails because it downloads incompatible `kubebuilder-tools` from the googleapis storage location.
I suspect that in order to add `s390x` support we will require the following two steps at a minimum:

- Make `s390x`-compatible `kubebuilder-tools` available on https://storage.googleapis.com/kubebuilder-tools
- Modify the `Makefile` to remove the `race` option for the `s390x` architecture
I am looking for direction on how to make this possible so that the `s390x` Kubebuilder binary can be obtained directly from the Releases page.

Thanks.
/assign @camilamacedo86
/cc @jmrodri
Hi, does anyone know how the kubebuilder-tools archive is produced? I want to build it for s390x. Is there any make target or script? I see the archive just contains binaries for kubectl, etcd, and kube-apiserver. I produced kubectl and kube-apiserver binaries for s390x from the k8s repo, but etcd isn't there (maybe I'm still missing something).
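For context, a hand-rolled version of such an archive could be assembled like this, assuming the three binaries are already built for s390x; the archive name follows the pattern seen on the storage bucket and is an assumption here, not a documented convention:

```sh
# Repackage locally built s390x binaries into the layout described above:
# kubebuilder/bin/{etcd,kubectl,kube-apiserver}.
mkdir -p kubebuilder/bin
cp etcd kubectl kube-apiserver kubebuilder/bin/
# Archive name mirrors the bucket's naming pattern; assumed, not verified.
tar czf kubebuilder-tools-1.28.0-linux-s390x.tar.gz kubebuilder
```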
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/remove-lifecycle stale
I think we need to revisit this one.
@camilamacedo86 I'm not sure I fully understand the process, but I see you are doing something similar for arm64 in bug #2664. Are you able to trigger a job on Google Cloud to generate the artifact for s390x? I was able to work around `make test` by sideloading with setup-envtest. All three binaries (kubectl, kube-apiserver, etcd) already have s390x builds available. What more is required to get this onto https://storage.googleapis.com/kubebuilder-tools?
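As an aside, a minimal sketch of that sideloading workaround, assuming a locally built archive and the `setup-envtest` tool from controller-runtime (the version number and archive name are illustrative):

```sh
# Install setup-envtest from controller-runtime.
go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest

# Sideload a locally built archive so setup-envtest treats it as that version
# (layout: kubebuilder/bin/{etcd,kubectl,kube-apiserver}).
setup-envtest sideload 1.28.0 < kubebuilder-tools-1.28.0-linux-s390x.tar.gz

# Point the tests at the sideloaded binaries
# (-i: use installed versions only, -p path: print just the path).
export KUBEBUILDER_ASSETS="$(setup-envtest use -i -p path 1.28.0)"
make test
```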
Hi @tdaleibm,

We have no kubebuilder-tools built for s390x; I cannot find this platform/arch here: https://storage.googleapis.com/kubebuilder-tools. Could you please point out which kubebuilder-tools tar.gz on this page you found that is built to support the s390x architecture?

Also, we do not have kube-rbac-proxy built for it; see: https://console.cloud.google.com/gcr/images/kubebuilder/GLOBAL/kube-rbac-proxy.

PS: For that, we need to update the branch: https://github.com/kubernetes-sigs/kubebuilder/tree/kube-rbac-proxy-releases

And we do not have the Kubebuilder CLI binary built for this architecture either. So, we do not support this architecture, and we do not build the binaries for it so far.
Sorry, I meant the individual packages usually contained in kubebuilder-tools (so kubectl, kube-apiserver, and etcd). I'm not sure how the packaging of those into https://storage.googleapis.com/kubebuilder-tools is being done for other architectures.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale

@camilamacedo86 just checking if there is something I could do on my end to assist - thanks!
I added needs-triage to this one. I think we can discuss it in triage. Also, I froze it so that it will not be closed.
After further discussion in the KB meeting on Aug 25, it was decided to accept this one. Maybe we could use docker buildx to provide this support.
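As a reference for that idea, a one-off docker buildx invocation could look like this (the image name and platform list are illustrative, not the project's configuration):

```sh
# Build and push a multi-arch image; --push is needed because the local
# image store cannot hold a multi-platform manifest list.
docker buildx build --platform linux/amd64,linux/s390x \
  --tag example.com/my-org/controller:latest --push .
```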
The operator-sdk project supports s390x; the latest release shows we have s390x binaries: https://github.com/operator-framework/operator-sdk/releases/tag/v1.23.0

In our GitHub Actions we specify the platforms we support: https://github.com/operator-framework/operator-sdk/blob/master/.github/workflows/deploy.yml#L104-L113

The project's `Makefile` has image targets that can handle the different arches: https://github.com/operator-framework/operator-sdk/blob/master/Makefile#L103-L116
@jmrodri @rposts @tdaleibm,

To accomplish this one we need to:

- a) Ensure that the envtest tools (https://storage.googleapis.com/kubebuilder-tools) are built for this arch. To see how to do it: https://github.com/kubernetes-sigs/kubebuilder/blob/master/RELEASE.md#to-build-the-kubebuilder-tools-artifacts-required-to-use-env-test (we need to check the branch)
- b) Ensure that the auth-proxy image scaffolded by default in the projects provides this support; note that we will also need to change another branch: https://github.com/kubernetes-sigs/kubebuilder/blob/master/RELEASE.md#to-build-the-kube-rbac-proxy-images
- c) Then, we have a PR open that adds a new target to allow authors to generate the manager image for multiple platforms: https://github.com/kubernetes-sigs/kubebuilder/pull/2906 (assuming that one gets merged, we will need to update the nodeAffinity info to add this arch and the new target; see the sketch after this list)
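For item c), a rough sketch of the commands behind such a multi-platform manager-image target; the builder name, image tag, and platform list are illustrative assumptions, and PR #2906 defines the actual target:

```sh
# Illustrative values; adjust to the architectures and registry you use.
PLATFORMS=linux/amd64,linux/arm64,linux/ppc64le,linux/s390x
IMG=example.com/my-org/manager:latest

# Use a dedicated buildx builder, build and push for all platforms, then clean up.
docker buildx create --name project-builder --use
docker buildx build --push --platform "$PLATFORMS" --tag "$IMG" .
docker buildx rm project-builder
```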
TL;DR: see the comments at https://github.com/kubernetes-sigs/kubebuilder/issues/2298#issuecomment-1227480907
@jmrodri in SDK we are generating the scorecard and kuttl images with this support, and probably the ansible/helm images too. However, the projects are scaffolded with two images/workloads. One is kube-rbac-proxy, which is managed by Kubebuilder (so it is not providing this support); the other is the manager image itself, which is built by the author, and so far we do not provide a helper to allow them to build it with multi-architecture support.
@camilamacedo86 it seems like the kube-rbac-proxy image is now being built for `s390x`. This should bring us a step closer? Please let me know how we can resume this work. Thanks!
Closing this one as sorted out.
Thanks @camilamacedo86 - can we expect the `s390x` binary to be available in the next release?
It was closed by mistake.
We are still missing the envtest and CLI binary builds. More info: https://github.com/kubernetes-sigs/kubebuilder/issues/2298#issuecomment-1242994510
@camilamacedo86 I tried building the tools and it seems to work - I presume this is what is needed by envtest(?):
```console
# wget https://raw.githubusercontent.com/kubernetes-sigs/kubebuilder/tools-releases/build/thirdparty/linux/Dockerfile
# docker build --build-arg OS=linux --build-arg ARCH=s390x --build-arg KUBERNETES_VERSION=v1.29.0 -t kbld .
# docker run --rm -it kbld sh
/ # ls
bin                              lib                              root                             tmp
dev                              media                            run                              usr
etc                              mnt                              sbin                             var
home                             opt                              srv
kubebuilder_linux_s390x.tar.gz   proc                             sys
/ # tar xvfz kubebuilder_linux_s390x.tar.gz
kubebuilder/
kubebuilder/bin/
kubebuilder/bin/etcd
kubebuilder/bin/kubectl
kubebuilder/bin/kube-apiserver
/ # ls -larth kubebuilder/bin/
total 202M
-rwxr-xr-x    1 root     root      127.2M Jan 26 13:47 kube-apiserver
-rwxr-xr-x    1 root     root       51.4M Jan 26 13:47 kubectl
-rwxr-xr-x    1 root     root       23.5M Jan 26 13:47 etcd
drwxr-xr-x    3 root     root          17 Jan 26 14:01 ..
drwxr-xr-x    2 root     root          55 Jan 26 14:01 .
/ #
```
Hi @rposts,

ENVTEST is used to run the tests. The binaries are configured in your local env and executed when you run `make test`; see: https://github.com/kubernetes-sigs/kubebuilder/blob/93d8fb8edb38682ae7b1fa354e72d759732faa94/testdata/project-v4/Makefile#L171-L174
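Roughly, that scaffolded target amounts to the following (a sketch; the version pin and paths here are illustrative, so check the linked Makefile for the authoritative form):

```sh
# setup-envtest resolves the binaries and prints their path; KUBEBUILDER_ASSETS
# tells envtest where to find etcd, kube-apiserver, and kubectl during tests.
KUBEBUILDER_ASSETS="$(setup-envtest use 1.29.0 --bin-dir ./bin -p path)" \
  go test ./... -coverprofile cover.out
```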
Then, check some examples of code using ENV TEST to test out the controllers:
- https://github.com/kubernetes-sigs/kubebuilder/blob/master/testdata/project-v4-with-deploy-image/internal/controller/memcached_controller_test.go
I am looking for us to set up the ARCH builds to be distributed at https://storage.googleapis.com/kubebuilder-tools. Regarding your comment https://github.com/kubernetes-sigs/kubebuilder/issues/2298#issuecomment-1912118791: yes, it seems that it is sorted out in this way. It is a very good approach. We should document that. 🥇

However, I have a really good question. Is your local env Linux s390x? Why do you want to run the Kubebuilder CLI in those environments? Are you looking for a final solution that works on an s390x server?
@camilamacedo86 I do have a local s390x build environment. It can also be obtained via the IBM LinuxONE Community Cloud.

Kubebuilder is being used by certain projects to run/set up tests (velero, for example), and the availability of kubebuilder in an s390x environment will help with the enablement effort.

Hope this helps. Thanks.
Hi @rposts
> I do have a local s390x build environment.

Yes, that is helpful, and it is precisely what I wanted to confirm: that people do have local environments running on s390x. Otherwise it would not make sense to target the tool and ENVTEST at it, since you could, for example, develop on a Mac while targeting an s390x environment; in that case the image built for your Manager would need to support s390x, but not the CLI itself.
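To make the distinction concrete, a developer on a non-s390x machine can already cross-compile the manager binary for an s390x image without an s390x CLI; a minimal sketch (the entrypoint path varies by scaffold version and is an assumption here):

```sh
# Go cross-compiles by setting GOOS/GOARCH; no s390x host is needed to
# produce a manager binary destined for an s390x container image.
GOOS=linux GOARCH=s390x CGO_ENABLED=0 go build -o bin/manager main.go
```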
Thank you for sharing.
Hi @rposts,

We now have the binaries for ENVTEST as well; see:

Also, the auth-proxy image is already built with this support, and you added the target arch to the GoReleaser config, so we will have the CLI binary in the next release: https://github.com/kubernetes-sigs/kubebuilder/pull/3741

With that, we can close this one. Thank you for your help.