volsync
arm & arm64 support
Describe the feature you'd like to have.
I would like to be able to run VolSync on arm and arm64 processors.
I am attempting to use VolSync on a Raspberry Pi 4 cluster (arm) and also on Oracle's Ampere A1 (arm64), but the manager is failing to run because it is the wrong architecture. The underlying storage movers have builds supporting these architectures.
What is the value to the end user? (why is it a priority?)
A business justification could be that AWS Graviton uses arm64 binaries and supports running EKS. Also, running on a Raspberry Pi is always fun.
How will we know we have a good solution? (acceptance criteria)
The acceptance criteria should be that CI successfully cross-builds for the various architectures and multi-platform Docker images are published, allowing VolSync to run on arm and arm64 processors.
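For illustration only, a rough sketch of what such a release step could look like with docker buildx; the image name, tag, and platform list here are placeholders, not the project's actual pipeline:

```sh
# Hypothetical release step: cross-build and push a single multi-arch manifest.
# Assumes a buildx builder with QEMU/binfmt configured (see the environment
# sketch further down in this thread).
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --tag quay.io/backube/volsync:latest \
  --push .

# Acceptance check: the published tag should advertise every target platform.
docker buildx imagetools inspect quay.io/backube/volsync:latest
```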
Additional context
Until it's official, I have an image built for arm64 as well: https://hub.docker.com/r/zimbres/volsync/tags
This is a dupe of #574. My image has run fine on Oracle for months, so we might just need CI support.
Looks like support for ARM may be coming to GH actions. That would allow us to test and release multi-arch. https://github.blog/changelog/2023-10-30-accelerate-your-ci-cd-with-arm-based-hosted-runners-in-github-actions/
@zimbres can you kindly update it to 0.8.0?
It's done.
Tags "latest" and "main" track the main branch. Tag "0.8.0" tracks the release-0.8 branch.
I'm noticing that the quay.io/backube/volsync:0.8.0 image is not arm64 compatible.
exec /mover-restic/entry.sh: exec format error
That is, quay.io/backube/volsync@sha256:b0969dce78b900412303153f0761b2233204164572eff1aeebf31707db7e20db
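(Aside, as a hedged suggestion: standard Docker or skopeo tooling can confirm which architectures a published tag actually carries.)

```sh
# List the architectures present in the published manifest; if arm64 is absent,
# the "exec format error" above is expected on an arm64 node.
docker manifest inspect quay.io/backube/volsync:0.8.0 | grep '"architecture"'

# skopeo works too and doesn't require pulling the image locally:
skopeo inspect --raw docker://quay.io/backube/volsync:0.8.0
```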
> It's done.
> Tags "latest" and "main" track the main branch. Tag "0.8.0" tracks the release-0.8 branch.
Seems like not quite, when trying to use it for restic:
exec /mover-restic/entry.sh: no such file or directory
amd64 and arm64 supported: registry.samipsolutions.fi/library/volsync:0.8.0
I also have an image built and pushed here, based on Alpine. It only includes rclone and restic, but I am open to PRs if people want any other mover.
Container: https://github.com/onedr0p/containers/pkgs/container/volsync
Source: https://github.com/onedr0p/containers/tree/main/apps/volsync
> Looks like support for ARM may be coming to GH actions. That would allow us to test and release multi-arch. https://github.blog/changelog/2023-10-30-accelerate-your-ci-cd-with-arm-based-hosted-runners-in-github-actions/
@JohnStrunk just as a side note: you don't need a GitHub runner on arm to build arm images. I've built many multi-arch container images on GitHub, even for sparc and other crazy environments.
@tuxpeople Do you have a good example you could point me to?
There's also the issue of building vs testing. Today, we test the built containers via the e2e tests in kind. My assumption is we still wouldn't be able to test non-x86-64 images (plz correct me here). I'm wondering what the community's thoughts are around that... x64 gets tested, everything else is :man_shrugging:.
@JohnStrunk would it be better to pull binaries out of the official images like this? It would lessen the support burden of maintaining and compiling these tools in this project.
https://github.com/onedr0p/containers/blob/main/apps/volsync/Dockerfile#L21L22
https://github.com/onedr0p/containers/blob/main/apps/volsync/Dockerfile#L47L48
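To make the idea concrete, here is a minimal sketch of copying a mover binary out of its official multi-arch image rather than compiling it in this repo; the upstream image, version tag, base image, and binary paths are assumptions for illustration, not VolSync's actual Dockerfile:

```sh
# Hypothetical mover Dockerfile that reuses the upstream restic binary
# (image names, tag, and paths are illustrative assumptions).
cat > Dockerfile.sketch <<'EOF'
FROM docker.io/restic/restic:0.16.4 AS restic
FROM registry.access.redhat.com/ubi9/ubi-minimal
COPY --from=restic /usr/bin/restic /usr/local/bin/restic
EOF

# Cross-build it for both architectures (assumes a QEMU-enabled buildx builder).
docker buildx build -f Dockerfile.sketch --platform linux/amd64,linux/arm64 .
```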
On the matter of s390x support, maybe that can be dropped? I'm not sure if there are any users on that platform; it seems like an esoteric one.
From an upstream VolSync standpoint, yes, it would be easier (assuming they are statically linked, of course). The complication to much of the build process is that we also ship this as a supported Red Hat product, and that comes with requirements for stuff like CVE remediation, FIPS support, and other stuff. We have to trade off simplicity of the upstream build process w/ what is required for the downstream builds in order to comply w/ our internal processes (and not have to do everything twice).
... and we're required to ship s390x :roll_eyes:... I wish we could drop it.
Hi @JohnStrunk
Sorry, it wasn't sparc, I was wrong. Sparc isn't in the list.
> @tuxpeople Do you have a good example you could point me to?
Not sure how "good" it is, but here is one:
Definition of platforms: https://github.com/tuxpeople/docker-podsync/blob/b98f4354aa547b69a39a38638edd1ec7408d07ff/.github/workflows/release.yml#L17-L18
Prepare build environment: https://github.com/tuxpeople/docker-podsync/blob/b98f4354aa547b69a39a38638edd1ec7408d07ff/.github/workflows/release.yml#L113-L118
Build: https://github.com/tuxpeople/docker-podsync/blob/b98f4354aa547b69a39a38638edd1ec7408d07ff/.github/workflows/release.yml#L133-L153
This approach supports the following platforms:
"platforms": "linux/amd64,linux/amd64/v2,linux/amd64/v3,linux/arm64,linux/riscv64,linux/ppc64le,linux/s390x,linux/386,linux/mips64le,linux/mips64,linux/arm/v7,linux/arm/v6"
> There's also the issue of building vs testing. Today, we test the built containers via the e2e tests in kind. My assumption is we still wouldn't be able to test non-x86-64 images (plz correct me here). I'm wondering what the community's thoughts are around that... x64 gets tested, everything else is 🤷♂️.
I do it like that: I test the x86_64 container and assume that the other platforms work as well. I understand that this may be insufficient for you. I don't know whether it would be possible using QEMU and an emulated environment, or whether you would have to wait for ARM runners for e2e testing.
Edit:
I've no idea about your tests, but if docker run works for you, please see this as a (potential?) solution: https://github.com/orgs/community/discussions/38728#discussioncomment-6324428
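For example, something along these lines might serve as a smoke test of a non-native image on an x86-64 runner; it relies on QEMU emulation, so it is slow and not a substitute for real e2e runs in kind, and the image here is a placeholder rather than a VolSync image:

```sh
# Run the arm64 variant of an image under QEMU on an amd64 host; uname -m
# should report aarch64 if emulation is working.
docker run --rm --platform linux/arm64 alpine:3.19 uname -m
```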