ARM64 images built and available on AWX repository
Please confirm the following
- [X] I agree to follow this project's code of conduct.
- [X] I have checked the current issues for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
Feature type
New Feature
Feature Summary
The feature I'm requesting is officially built ARM images. I note that support has been added for building the images yourself; however, that is not an option for a lot of people out there.
Implementation of this proposal requires no additional infrastructure, just some minor changes to the build commands.
Docker now ships buildx. This tool enables cross-platform building of containers for countless architectures. Best of all, this happens within a single build command: `docker buildx build --platform=<list>`. Using this command instead of the original `docker build` cross-compiles for each of the platforms listed and places all images within the same manifest. In addition, you can add `--push` to push all built images and the manifest to the container registry.
I currently use this method to cross-compile all of my container images.
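As an illustration, here is a dry-run sketch of such an invocation. The image tag is a hypothetical placeholder, and the command is echoed rather than executed so the sketch runs even without a docker daemon:

```shell
#!/bin/sh
# Dry-run sketch: compose the single buildx invocation that cross-builds
# both architectures and pushes them under one manifest.
PLATFORMS="linux/amd64,linux/arm64"
IMAGE="quay.io/example/awx:latest"   # hypothetical tag, for illustration only

BUILD_CMD="docker buildx build --platform=${PLATFORMS} --tag ${IMAGE} --push ."
# Echoed instead of executed, so no docker daemon is required here:
echo "${BUILD_CMD}"
```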
What requires changing
docker build command.
```diff
-docker build -t {{ awx_image }}:{{ awx_image_tag }} \
+docker buildx build -t {{ awx_image }}:{{ awx_image_tag }} \
     -f {{ dockerfile_name }} \
     --build-arg VERSION={{ awx_version }} \
     --build-arg SETUPTOOLS_SCM_PRETEND_VERSION={{ awx_version }} \
     --build-arg HEADLESS={{ headless }} \
+    --platform=linux/amd64,linux/arm64 \
+    --push \
     .
```
notes:
- `--platform=linux/amd64,linux/arm64` builds both amd64 and arm64 together and places them in a single manifest (note: the buildx flag is `--platform`, singular)
- `--push` pushes everything together to the container registry
Every `FROM` declaration within the dockerfiles.
```diff
+ARG TARGETPLATFORM=linux/amd64
-FROM quay.io/centos/centos:stream9 as builder
+FROM --platform=$TARGETPLATFORM quay.io/centos/centos:stream9 as builder
```
notes:
- `ARG TARGETPLATFORM=linux/amd64` gives the `TARGETPLATFORM` variable a default value when it is not supplied at build time; buildx sets it automatically to a single platform per build, so the default must also be a single platform
- `FROM --platform=$TARGETPLATFORM` tells docker to use the specified architecture for the container; if this value is omitted from the `FROM` declaration, the build system's architecture is used
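Put together, a minimal multi-stage Dockerfile using this pattern might look like the following. This is a sketch, not AWX's actual Dockerfile, and the copied path is illustrative only:

```dockerfile
# Default to amd64 when buildx does not supply TARGETPLATFORM:
ARG TARGETPLATFORM=linux/amd64

FROM --platform=$TARGETPLATFORM quay.io/centos/centos:stream9 as builder
# ... build steps, executed for the requested architecture ...

FROM --platform=$TARGETPLATFORM quay.io/centos/centos:stream9
COPY --from=builder /opt/awx /opt/awx   # illustrative path only
```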
For cross-compilation to work, the packages `binfmt-support` and `qemu-user-static` are required (these are the Debian package names). Together they allow running binaries of a different architecture. Alternatively, you can do the build from a docker container (the method I use), which contains everything required for the cross-compilation to work. Prior to building, you have to activate support in the kernel for other binary formats to run: `update-binfmts --enable`.
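A guarded sketch of that host setup follows (Debian package names as above; the enable step is skipped gracefully when the tool is not installed, since it also requires root):

```shell
#!/bin/sh
# Host prerequisites for cross-building: binfmt-support + qemu-user-static.
if command -v update-binfmts >/dev/null 2>&1; then
    # Activate execution of foreign binary formats in the kernel (needs root):
    update-binfmts --enable || true
    BINFMT_STATE="enabled"
else
    # apt-get install binfmt-support qemu-user-static
    BINFMT_STATE="missing"
fi
echo "binfmt support: ${BINFMT_STATE}"
```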
I'm not familiar with how the GitHub CI/CD pipelines work; however, I am successfully doing cross-compilation within the GitLab ecosystem, whose runners are all AMD64. You may be able to convert this stripped-down GitLab CI job. Original here
```yaml
.build_docker_container:
  stage: build
  image:
    name: nofusscomputing/docker-buildx-qemu:dev
    pull_policy: always
  services:
    - name: docker:23-dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_DOCKERFILE: dockerfile
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - git submodule foreach git submodule update --init
    - if [ "0$JOB_ROOT_DIR" == "0" ]; then ROOT_DIR=gitlab-ci; else ROOT_DIR=$JOB_ROOT_DIR ; fi
    - echo "[DEBUG] ROOT_DIR[$ROOT_DIR]"
    - docker info
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - pip3 install setuptools wheel
    # see: https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests/1861
    # on why this `docker run` is required. Without it, multiarch support doesn't work.
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    - update-binfmts --display
    - update-binfmts --enable # Important: ensures execution of other binary formats is enabled in the kernel
    - docker buildx create --driver=docker-container --driver-opt image=moby/buildkit:v0.11.6 --use
    - docker buildx inspect --bootstrap
  script:
    - update-binfmts --display
    - |
      docker buildx build --platform=$DOCKER_IMAGE_BUILD_TARGET_PLATFORMS . \
        --label org.opencontainers.image.created="$(date '+%Y-%m-%d %H:%M:%S%:z')" \
        --label org.opencontainers.image.documentation="$CI_PROJECT_URL" \
        --label org.opencontainers.image.source="$CI_PROJECT_URL" \
        --label org.opencontainers.image.revision="$CI_COMMIT_SHA" \
        --push \
        --build-arg CI_JOB_TOKEN=$CI_JOB_TOKEN --build-arg CI_PROJECT_ID=$CI_PROJECT_ID --build-arg CI_API_V4_URL=$CI_API_V4_URL \
        --file $DOCKER_DOCKERFILE \
        --tag $DOCKER_IMAGE_BUILD_REGISTRY/$DOCKER_IMAGE_BUILD_NAME:$DOCKER_IMAGE_BUILD_TAG;
      docker buildx imagetools inspect $DOCKER_IMAGE_BUILD_REGISTRY/$DOCKER_IMAGE_BUILD_NAME:$DOCKER_IMAGE_BUILD_TAG;
```
summary:
- starts a docker container, `nofusscomputing/docker-buildx-qemu:dev`, where all commands are run from, including the build; as the container is dind, it links to docker via socket
- `setuptools` and `wheel` are required for containers that use python packages, if the package requires compilation
- `update-binfmts --enable` enables the kernel support
- `docker buildx create --driver=docker-container --driver-opt image=moby/buildkit:v0.11.6 --use` sets up buildx to use buildkit
- `docker buildx build {etc}...` is the all-in-one command for the cross-compile build, manifest creation, and push to a container registry
- the final `inspect` shows the manifest and the images it contains
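For the GitHub side, an untested sketch of an equivalent workflow using the standard docker actions might look like this. The action versions, secret names, and image tag below are assumptions for illustration, not this repo's actual configuration:

```yaml
name: multiarch-build

on: [push]

jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3      # qemu/binfmt, so arm64 binaries can run
      - uses: docker/setup-buildx-action@v3    # buildkit-backed multi-arch builder
      - uses: docker/login-action@v3
        with:
          registry: quay.io
          username: ${{ secrets.QUAY_USERNAME }}   # hypothetical secret names
          password: ${{ secrets.QUAY_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          tags: quay.io/example/awx:latest         # hypothetical tag
```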
I'm happy to assist or, if OK, to start a PR. For the latter, however, I will require someone with GitHub Actions knowledge to walk me through the adjustments.
Select the relevant components
- [ ] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [X] Other
Steps to reproduce
.
Current results
.
Suggested feature result
That https://quay.io/repository/ansible/awx contains both amd64 and arm64 images. Yes, I know these are different repos; the same applies to the operator and ansible-ee images.
Additional information
No response
Whilst we wait: https://gitlab.com/nofusscomputing/projects/ansible/awx-arm for automagic arm builds, and https://hub.docker.com/r/nofusscomputing/awx for the location of the builds.
@jon-nfc awesome work, thanks for this information. A lot of interest around arm64 builds.
I think integrating this into our CI would take some work, and we'd probably need some outside contributors willing to take up that work
Basically someone needs to port the steps you outlined into our GH workflows to build the target image and push to quay
G'day @fosterseth,
> @jon-nfc awesome work, thanks for this information. A lot of interest around arm64 builds.
> I think integrating this into our CI would take some work, and we'd probably need some outside contributors willing to take up that work
The amount of work is not as much as it seems: making the changes for the build to be multi-arch took no more than an hour (only due to having to learn the layout), and on my side getting the GitLab builds to work was another 10-15 mins. I expect the conversion for GitHub to take around the same, although, as mentioned in the OP, I'm not familiar with GitHub CI/CD. I'm happy to raise a PR to make the required changes, though I will require someone with GitHub CI/CD knowledge to check my work, as I will have to learn how to use/implement it; that will increase the time to implement the changes. Who's a good POC for this knowledge and to code-review the PR?
> Basically someone needs to port the steps you outlined into our GH workflows to build the target image and push to quay
From what I've seen so far, the changes are relatively small. Time will only be increased by having to wait for confirmation that the workflows work/complete.
I haven't forgotten about this issue; however, I am going to wait before raising a PR, as the work from the following should be easily portable to this repo, since these repos appear to share a similar workflow:
- ansible/eda-server-operator#158
- ansible/eda-server-operator#161
- ansible/awx-operator#1681
+1
+1
waiting for this as well
Will this also fix the built awx-ee images? Those are still failing deployment to an ARM64 cluster. Is any work needed to make that work?
+1
Hi @jon-nfc, I'm definitely interested in this PR :)
working on it...
Thank you @TheRealHaoLiu, I removed my custom image and it works well! (tested with the latest AWX Operator version, 2.15.0)