containerized-data-importer
cdi-bazel-builder: exec user process caused "exec format error"
What happened:
When building in an aarch64 environment, running 'BUILD_ARCH=aarch64 make bazel-build' fails with:
- docker run -v kubevirt-cdi-volume:/root:rw,z --security-opt label:disable --rm --entrypoint /entrypoint-bazel.sh quay.io/kubevirt/kubevirt-cdi-bazel-builder:0.0.13 mkdir -p /root/go/src/kubevirt.io/containerized-data-importer/_out
  standard_init_linux.go:207: exec user process caused "exec format error"
I can't get an aarch64 build of the quay.io/kubevirt/kubevirt-cdi-bazel-builder:0.0.13 image. In addition, when I run './hack/build/bazel-build-builder.sh', it generates a cdi-bazel-builder image for amd64 only; aarch64 is not supported.
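For context, "exec format error" is what you get when the container's binary is built for a different architecture than the host and no emulation is registered. A quick way to confirm that the published builder tag ships no arm64 variant (just a diagnostic sketch with standard docker commands, not part of the CDI tooling; on older docker versions 'manifest inspect' may require the experimental CLI to be enabled):

  # List the architectures published for the builder tag; if only amd64
  # entries appear, there is no native arm64 variant to pull.
  docker manifest inspect quay.io/kubevirt/kubevirt-cdi-bazel-builder:0.0.13

  # Or check the architecture of the locally pulled image:
  docker image inspect --format '{{.Architecture}}' \
      quay.io/kubevirt/kubevirt-cdi-bazel-builder:0.0.13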
What you expected to happen: To be able to build CDI in an aarch64 environment.
How to reproduce it (as minimally and precisely as possible):
- Clone the repository
- Run 'BUILD_ARCH=aarch64 make bazel-build'
Environment:
- CDI version (use kubectl get deployments cdi-deployment -o yaml): v1.42.0
- Kubernetes version (use kubectl version): k3s version v1.22.7+k3s1 (8432d7f2)
- DV specification: N/A
- Cloud provider or hardware configuration: N/A
- OS (e.g. from /etc/os-release): Kylin 4.0.2
- Kernel (e.g. uname -a): Linux 4.4.131-20200704.kylin.server-generic aarch64
- Install tools: N/A
- Others: N/A
So I wrote this commit, but I'd rather submit it as a PR after #1983 gets merged, as it touches much of the same code.
After that we should figure out how to docker build the builder for arm64 (which we might be able to do by running the post-submit job on an arm64 server as well); with this commit we build an arm64 builder. One possible shape for that is sketched after this comment.
Blocked on #1983, kubevirt/project-infra#2000.
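In case it helps, a rough sketch of how the multi-arch builder could be published (this is not the actual hack/build scripts; the image name, tag, and build-context path are placeholders): build the builder per architecture with podman and push both variants under one tag as a manifest list.

  # Build the builder image for each architecture; cross-arch builds need
  # qemu-user-static binfmt handlers registered on the build host.
  podman build --arch amd64 -t quay.io/example/cdi-bazel-builder:amd64 path/to/builder-context
  podman build --arch arm64 -t quay.io/example/cdi-bazel-builder:arm64 path/to/builder-context

  # Publish both under one tag as a manifest list so clients pull the
  # variant matching their architecture.
  podman manifest create quay.io/example/cdi-bazel-builder:latest
  podman manifest add quay.io/example/cdi-bazel-builder:latest \
      containers-storage:quay.io/example/cdi-bazel-builder:amd64
  podman manifest add quay.io/example/cdi-bazel-builder:latest \
      containers-storage:quay.io/example/cdi-bazel-builder:arm64
  podman manifest push --all quay.io/example/cdi-bazel-builder:latest \
      docker://quay.io/example/cdi-bazel-builder:latest

Running the post-submit job natively on an arm64 server, as suggested above, would avoid the emulation step for the arm64 build entirely.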
I can cross-build aarch64 images successfully, but I want to try building for aarch64 natively; at present there are some problems with that.
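Until a native arm64 builder image exists, one possible stopgap on the aarch64 host is registering qemu user-mode emulation so the amd64-only builder image at least starts. This is only a sketch under the assumption that an emulated build is acceptable; Bazel under qemu can be very slow or flaky, and tonistiigi/binfmt is just one common way to register the handlers:

  # One-off, privileged: register qemu-x86_64 so amd64 binaries can run here.
  docker run --rm --privileged tonistiigi/binfmt --install amd64

  # The amd64 builder image should now start instead of failing with
  # "exec format error"; it reports x86_64 because it runs under emulation.
  docker run --rm --platform linux/amd64 --entrypoint uname \
      quay.io/kubevirt/kubevirt-cdi-bazel-builder:0.0.13 -m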
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen Because of the coupling of podman & buildah, our builder images haven't been multi-architecture. I'm fixing this.
@maya-r: Reopened this issue.
In response to this:
/reopen Because of the coupling of podman & buildah, our builder images haven't been multi-architecture. I'm fixing this.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.