wastebin
support multi-platform docker image
I tried to run quxfoo/wastebin on a Raspberry Pi, but I got: standard_init_linux.go:219: exec user process caused: exec format error.
Docker multi-platform image: https://docs.docker.com/build/building/multi-platform/
I'll take care of it after the 9th of June, unless you're willing to open a PR?
Any news?
Ah sorry, I will look into it, unless one of you wants to come up with a PR. Contributions are welcome.
I haven't found a simple solution yet to create multi-arch container images with docker buildx for my projects when cross-compiling. I would have expected the rust base image to work fine in this situation, but it does not. I currently only run amd64 and aarch64 machines, so I tried to use two different base images selected via TARGETPLATFORM. My solution works consistently, but it looks a bit odd; maybe it's a starting point for you. My Dockerfile looks like this:
FROM --platform=$BUILDPLATFORM rust:latest AS base-amd64
FROM --platform=$BUILDPLATFORM messense/rust-musl-cross:aarch64-musl AS base-arm64
# This was the only way I could find how to choose entirely different images
FROM --platform=$BUILDPLATFORM base-$TARGETARCH AS builder
ARG TARGETPLATFORM
# Select the toolchain based on TARGETPLATFORM. This writes a temporary file; I have it in .gitignore and in the cleanup step of my Makefile
RUN case "$TARGETPLATFORM" in \
"linux/amd64") echo x86_64-unknown-linux-musl > /rust_target.txt ;; \
"linux/arm64/v8") echo aarch64-unknown-linux-musl > /rust_target.txt ;; \
"linux/arm64") echo aarch64-unknown-linux-musl > /rust_target.txt ;; \
*) exit 1 ;; \
esac
RUN rustup target add "$(cat /rust_target.txt)"
RUN apt-get update && apt-get install --no-install-recommends -y musl-tools musl-dev
RUN update-ca-certificates
# Create my-project user
ENV USER=my-project
ENV UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/sbin/nologin" \
--no-create-home \
--uid "${UID}" \
"${USER}"
WORKDIR /my-project
COPY .cargo ./.cargo
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN cargo build --release --target "$(cat /rust_target.txt)"
RUN cp "target/$(cat /rust_target.txt)/release/my-project" .
###############################################################################
FROM scratch
# --platform=$TARGETPLATFORM
ARG GIT_COMMIT=unspecified
ARG BUILD_DATE=unspecified
ARG AUTHORS=unspecified
ARG LICENSES=unspecified
LABEL org.opencontainers.image.revision="$GIT_COMMIT"
LABEL org.opencontainers.image.created="$BUILD_DATE"
LABEL org.opencontainers.image.authors="$AUTHORS"
LABEL org.opencontainers.image.licenses="$LICENSES"
# Import from builder.
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group
WORKDIR /my-project
# Copy our build
COPY --from=builder /my-project/my-project ./
# Use an unprivileged user.
USER my-project:my-project
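The target-selection logic in the builder stage above can be exercised outside Docker as a plain shell function. This is a sketch that mirrors the RUN case statement; the function name map_target is mine, not part of the Dockerfile:

```shell
#!/bin/sh
# Map a Docker TARGETPLATFORM string to a Rust musl target triple,
# mirroring the case statement in the builder stage.
map_target() {
    case "$1" in
        "linux/amd64")    echo x86_64-unknown-linux-musl ;;
        "linux/arm64/v8") echo aarch64-unknown-linux-musl ;;
        "linux/arm64")    echo aarch64-unknown-linux-musl ;;
        *) return 1 ;;
    esac
}

map_target linux/amd64   # prints x86_64-unknown-linux-musl
map_target linux/arm64   # prints aarch64-unknown-linux-musl
```

Keeping the mapping in one place like this also makes it easy to add further platforms (e.g. linux/arm/v7) later without touching the rest of the build.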
I build images and deploy stuff using a self-hosted Woodpecker CI instance because I don't want to store secrets on GH servers. Therefore, I haven't looked into GH actions which build container images. There is probably a buildx action somewhere.
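For reference, there is indeed an official set of buildx actions. A minimal workflow sketch (untested here; the secret names and image tag are placeholders you would need to adapt) might look like:

```yaml
name: docker
on:
  push:
    tags: ["v*"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # QEMU provides emulation for non-native target platforms
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          tags: quxfoo/wastebin:latest
```

Whether the secrets live on GH servers is of course exactly the trade-off mentioned above.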
If I'm building locally, just to try something out or so, I use a Makefile. The image related stuff looks somewhat like this snippet:
ROOT_DIR := $(realpath $(dir $(realpath $(lastword $(MAKEFILE_LIST)))))
LICENSES := MIT
GIT_COMMIT := $(shell git rev-parse --short HEAD)
BUILD_DATE := $(shell date --rfc-3339=seconds)
AUTHORS ?= $(shell git config user.name) <$(shell git config user.email)>
# Must be set via the environment or on the make command line
DOCKER_REGISTRY ?=
DOCKER_USERNAME ?= $(shell whoami)
DOCKER_TAG ?= ${GIT_COMMIT}-dev
DOCKER_BUILDER ?= mybuilder
APPLICATION_NAME ?= $(notdir ${ROOT_DIR})
c-builder-create: # Create a new buildx builder "mybuilder"
docker buildx create \
--name ${DOCKER_BUILDER} \
--bootstrap \
--use
c-build: # Build the images
docker buildx build \
--provenance false \
--platform linux/arm64,linux/amd64 \
--build-arg GIT_COMMIT="${GIT_COMMIT}" \
--build-arg BUILD_DATE="${BUILD_DATE}" \
--build-arg AUTHORS="${AUTHORS}" \
--build-arg LICENSES="${LICENSES}" \
--tag ${DOCKER_REGISTRY}/${DOCKER_USERNAME}/${APPLICATION_NAME}:${DOCKER_TAG} \
.
c-release: # Build the images and push them to a registry (in this case a private one)
docker buildx build \
--provenance false \
--platform linux/arm64,linux/amd64 \
--build-arg GIT_COMMIT="${GIT_COMMIT}" \
--build-arg BUILD_DATE="${BUILD_DATE}" \
--build-arg AUTHORS="${AUTHORS}" \
--build-arg LICENSES="${LICENSES}" \
--tag ${DOCKER_REGISTRY}/${DOCKER_USERNAME}/${APPLICATION_NAME}:${DOCKER_TAG} \
--push \
.
c-registry-inspect: # Inspect the image metadata at the registry
docker buildx imagetools inspect ${DOCKER_REGISTRY}/${DOCKER_USERNAME}/${APPLICATION_NAME}:${DOCKER_TAG}
One note: if you don't use labels, you can omit all the --build-arg flags. I only have them in there because I use them in CI.
Hello, I recently set this up in my local ARM-based Kubernetes cluster and had to cross-compile it to get it to work.
It's possible to do it with the same Dockerfile, although it needs some adjustments, the biggest one being the use of Ubuntu (or something similar) instead of scratch.
I couldn't get it to work with scratch; I'm not sure if some shared library is needed, or if it simply doesn't behave properly on the ARM architecture.
If this is something you'd like to avoid, and you want to keep using scratch for x86_64, the other option is a separate Dockerfile for the ARM build.
If that sounds useful, or like something you'd want to add to this project, let me know and I can open a PR with my changes.
Also, I used Podman for all of my testing; in case you want to proceed with this, I'll test it with Docker as well.
I'd like to avoid anything larger than scratch for x86_64, so please go ahead with a separate Dockerfile. Although I don't see why a scratch image shouldn't work in the case of ARM.
Neither do I, but this error is returned (it doesn't happen on x86_64):
exec /app/wastebin: no such file or directory
It's probably a missing library; on scratch this error usually means the binary is dynamically linked and its loader isn't present in the image. I'll investigate some more.
I have it running using scratch now, and there is a way to build it for multiple architectures with a single Dockerfile, but it seems much cleaner to have it in a separate file, so I'll proceed that way.
I'll open a PR once I'm done testing it with Docker.
@matze From what I can see there's no GitHub Action or other pipeline in this repository that builds and pushes images to Docker Hub, so I assume you do this manually.
If that's the case, here's a step-by-step guide on how you can use Podman to create a multi-arch manifest and push the image to Docker Hub.
I'm using an internal Docker registry running in my Kubernetes cluster, but the principle should be the same. In this example the tag is v2.4.4, the domain of my local registry is registry.at.home, and the branch I'm building from is the current master.
- Create the manifest:
$ podman manifest create wastebin:v2.4.4
- Build the x86_64 image from the root directory of this repository:
$ podman build --platform linux/amd64 --manifest localhost/wastebin:v2.4.4 -f Dockerfile
- Build the arm64 image from the root directory of this repository:
$ podman build --platform linux/arm64 --manifest localhost/wastebin:v2.4.4 -f Dockerfile.arm
- After the builds are done, check that the manifest contains both platforms; it should look something like this:
$ podman manifest inspect --verbose localhost/wastebin:v2.4.4
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 800,
            "digest": "sha256:f02c775c193ab9de5b7d68cb4e71ef2ab8bb9c852f603dd0e71d015f622725ce",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            }
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 1408,
            "digest": "sha256:3f8289518e56f9e666d51b34b66e67311671eb1c4fef4463020d6a3b21a06f08",
            "platform": {
                "architecture": "arm64",
                "os": "linux"
            }
        }
    ]
}
- Push the manifest to Docker Hub (in my case, my internal Docker registry):
$ podman manifest push localhost/wastebin:v2.4.4 registry.at.home/wastebin:v2.4.4
- Inspect the remote manifest:
$ podman manifest inspect --verbose registry.at.home/wastebin:v2.4.4
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 812,
            "digest": "sha256:243843ed744fadcf208f75caf692075fd1f2314f1ece6a7c515522d006cc6a64",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            }
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 1438,
            "digest": "sha256:07d06f364cdc0da2373ef17d758a87d2c1a480f36481fe16c0cac426ba0c4b61",
            "platform": {
                "architecture": "arm64",
                "os": "linux"
            }
        }
    ]
}
- Testing in my local ARM-based cluster:
$ kubectl --context <relevant-context> -n wastebin logs wastebin-d678cdbb6-mm49m
2024-07-12T23:45:00.564484Z INFO rusqlite_migration: Database migrated to version 6
The relevant part of the deployment looks like this:
...
spec:
  containers:
  - image: registry.at.home/wastebin:v2.4.4
    imagePullPolicy: IfNotPresent
...
- Testing on an x86_64 machine:
$ podman run -p 8088:8088 -it registry.at.home/wastebin:v2.4.4
2024-07-13T00:00:57.533325Z INFO rusqlite_migration: Database migrated to version 6
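The steps above can be consolidated into a small script. This is a sketch, not part of the repository: by default it only prints the commands it would run; set DRY_RUN= (empty) to actually execute them. Registry and tag default to the example values from this thread:

```shell
#!/bin/sh
# Multi-arch wastebin build/push with Podman, mirroring the steps above.
# By default the commands are only printed; set DRY_RUN= (empty) to execute.
set -eu

TAG="${TAG:-v2.4.4}"
REGISTRY="${REGISTRY:-registry.at.home}"
MANIFEST="localhost/wastebin:$TAG"

run() {
    echo "+ $*"
    # DRY_RUN defaults to on; only execute when it is explicitly set empty
    [ -n "${DRY_RUN-1}" ] || "$@"
}

main() {
    run podman manifest create "$MANIFEST"
    run podman build --platform linux/amd64 --manifest "$MANIFEST" -f Dockerfile
    run podman build --platform linux/arm64 --manifest "$MANIFEST" -f Dockerfile.arm
    run podman manifest push "$MANIFEST" "$REGISTRY/wastebin:$TAG"
}

main
```

Run from the repository root, e.g. DRY_RUN= REGISTRY=registry.at.home TAG=v2.4.4 sh build.sh.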
If this looks good, I'll do the same with Docker, in case that's how you prefer to build the images.
If and when you're willing to tackle this, and once a multi-arch image is available in the Docker Hub repository, let me know and I can test it on ARM hardware.