Add multi-architecture (amd64 & arm64) builds
Changes
Build System
- `tools/container-build.sh`: Removed forced amd64 cross-compilation on ARM hosts. Builds now run natively for the host architecture (amd64 or arm64). Override with `DOCKER_DEFAULT_PLATFORM`.
- `tools/make-deb.sh`: Package architecture is now determined by the `ARCH` environment variable.
- `Containerfile`: Added `TARGETPLATFORM`, `BUILDER_BASE`, and `FINAL_BASE` build args for cross-platform Go downloads and configurable base images (see the sketch below).
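For illustration, here is roughly how those build args could be exercised from a shell; this is a sketch, not the exact invocation in `tools/container-build.sh`, and the base-image values are placeholders:

```sh
# Sketch only — flags and base images are illustrative. BuildKit sets
# TARGETPLATFORM automatically for each --platform value, which lets the
# Containerfile fetch the matching Go tarball for that architecture.
docker buildx build \
  --platform linux/arm64 \
  --build-arg GO_VERSION=1.24.6 \
  --build-arg BUILDER_BASE=docker.io/library/ubuntu:24.04 \
  --build-arg FINAL_BASE=docker.io/library/ubuntu:24.04 \
  -f Containerfile \
  -t boulder:dev .
```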
CI/CD Workflows
- `release.yml`: Split into 3 jobs (build-artifacts, create-release, push-images). Matrix builds amd64 on `ubuntu-24.04` and arm64 on `ubuntu-24.04-arm`. Creates a multi-platform manifest for GHCR images.
- `try-release.yml`: Added a matrix to test both architectures.
- `.dockerignore`/`.gitignore`: Added the `.github` directory and build artifacts.
Versioning
- Changed the version scheme from `${GO_VERSION}.$(date +%s)` to `${GO_VERSION}.${COMMIT_TIMESTAMP}` for reproducible builds.
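As a sketch of why this is reproducible (the PR may derive the value differently), the commit timestamp is a property of the commit itself rather than of the build machine's clock:

```sh
# Committer timestamp of HEAD in seconds since the epoch; identical for every
# rebuild of the same commit, unlike $(date +%s).
COMMIT_TIMESTAMP=$(git show -s --format=%ct HEAD)
VERSION="${GO_VERSION}.${COMMIT_TIMESTAMP}"
```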
Image Tags
- Repository owner now uses `${{ github.repository_owner }}` instead of the hardcoded `letsencrypt`.
- Architecture-specific tags: `boulder:${VERSION}-amd64`, `boulder:${VERSION}-arm64`
- Generic tags preserved: `boulder:${VERSION}`, `boulder`
Artifacts
- `.deb` packages: `boulder-${VERSION}-${COMMIT_ID}.amd64.deb`, `.arm64.deb`
- Tarballs: `boulder-${VERSION}-${COMMIT_ID}.amd64.tar.gz`, `.arm64.tar.gz`
- Container images: multi-platform manifest with both architectures (see the sketch below)
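As a sketch of how the per-architecture images can be stitched into a single multi-platform tag (the workflow may use a different mechanism; `OWNER` here stands in for the repository owner):

```sh
# Combine the two architecture-specific images into one manifest list so that
# `docker pull ghcr.io/${OWNER}/boulder:${VERSION}` resolves to the right arch.
docker buildx imagetools create \
  -t "ghcr.io/${OWNER}/boulder:${VERSION}" \
  "ghcr.io/${OWNER}/boulder:${VERSION}-amd64" \
  "ghcr.io/${OWNER}/boulder:${VERSION}-arm64"
```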
Testing
```sh
# Native build (amd64 or arm64)
GO_VERSION=1.24.6 ./tools/container-build.sh

# Force specific architecture
DOCKER_DEFAULT_PLATFORM=linux/amd64 GO_VERSION=1.24.6 ./tools/container-build.sh
```
This PR introduces a breaking change in artifact naming that we should discuss:
Current Change:
- Before: `boulder-1.25.0.xxx-commit.x86_64.tar.gz` and `boulder-1.25.0.xxx-commit.x86_64.deb`
- After: `boulder-1.25.0.xxx-commit.amd64.tar.gz` and `boulder-1.25.0.xxx-commit.amd64.deb`
The Question:
Should we maintain backward compatibility by keeping x86_64 naming for AMD64 artifacts?
Considerations:
Arguments for standardized naming (amd64):
- Consistent with Docker/Debian conventions
- Cleaner, more predictable naming scheme
Arguments for backward compatibility (x86_64):
- Won't break existing CI/CD pipelines
- Won't break download scripts expecting current names
Potential Impact:
- Any automation that downloads artifacts by name
- CI/CD systems that expect specific filename patterns
- Documentation referencing artifact names
Implementation Options:
- Keep current PR as-is (breaking change, but cleaner)
- Preserve `x86_64` naming for AMD64 while using `arm64` for ARM
- Add both naming schemes temporarily with a deprecation timeline (see the sketch below)
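If the dual-naming option were chosen, a minimal sketch (hypothetical, not part of this PR) could be a post-build step like:

```sh
# Hypothetical transition shim: also publish the amd64 artifacts under the
# legacy x86_64 names until a deprecation deadline.
if [ "${ARCH}" = "amd64" ]; then
  for ext in tar.gz deb; do
    cp "boulder-${VERSION}-${COMMIT_ID}.amd64.${ext}" \
       "boulder-${VERSION}-${COMMIT_ID}.x86_64.${ext}"
  done
fi
```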
What's your preference? Will any existing systems be impacted by this naming change?
Between this and #8386, whichever merges last should be modified to (ideally) handle uploading both architectures' images when tagging a release, or at least to make sure the builds are amd64 as a temporary quick fix.
#8386 just merged, so if possible, let's get that into this PR. Sorry about the hassle!
@jprenken et al., I think this last set of changes addresses the feedback and adds full multi-arch image builds on ghcr.io.
The comments in the try-release and release workflows indicate a single Go version is used for release and multiple versions are possible for try-release. Is this still correct? I would like to externalize the GO_VERSION to simplify the workflows.
If both release and try-release will only need one version, either:
- Read the version specified in `go.mod`, OR
- Use the convention of a `.go-version` file in the repo root with contents like:

```
1.25.0
```
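A sketch of consuming such a file from a workflow step (the output name is illustrative):

```sh
# Read the pinned Go version and expose it as a step output for later steps
# (or as a job output via the job's `outputs:` map).
echo "go-version=$(tr -d '[:space:]' < .go-version)" >> "$GITHUB_OUTPUT"
```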
If one or both of the workflows depend on having multiple versions:
- Use a `.github/go-versions.json` file with contents like:

```json
{
  "versions": ["1.25.0", "1.24.6"]
}
```
The release workflow can enforce a single version by choosing only the first entry if desired.
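A sketch of the usual pattern for this (job, step, and output names are illustrative): a small setup job reads the file into a job output, and downstream jobs build their matrix from it with `fromJSON`:

```sh
# In a "setup" job step: emit the version list as compact JSON. Exposed via the
# job's `outputs:` map, a downstream job can then declare
#   strategy:
#     matrix:
#       go-version: ${{ fromJSON(needs.setup.outputs.versions) }}
echo "versions=$(jq -c '.versions' .github/go-versions.json)" >> "$GITHUB_OUTPUT"

# Release variant: wrap only the first entry to enforce a single version.
echo "versions=$(jq -c '[.versions[0]]' .github/go-versions.json)" >> "$GITHUB_OUTPUT"
```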
> The comments in the try-release and release workflows indicate a single Go version is used for release and multiple versions are possible for try-release. Is this still correct? I would like to externalize the GO_VERSION to simplify the workflows.
Yes, it's important that the try-release build target multiple Go versions. We've had this breakage in the past (e.g., we were testing CI against an upcoming version of Go, but when we went to make that the default, the release build was broken). We've even had situations in which we want the real release build to produce multiple versions, because we aren't sure whether prod is going to update to the new Go version before or after the new Boulder version is deployed, or because RVAs are running a different Go version than the on-prem services.
> If both release and try-release will only need one version, either:
> - Read the version specified in `go.mod`.
Even if one version were acceptable, this would be a semantically messy solution. The version indicated in go.mod should only be updated when the minimum version of the stdlib required by the project's code moves forward; i.e. it should represent a required minimum version, not a target version.
> - Use a `.github/go-versions.json` file with contents like: `{ "versions": ["1.25.0", "1.24.6"] }`
Is this technique -- externalizing a workflow's matrix to a data file -- widely used? Is it natively supported by GitHub Actions, or would we need to add a bunch of workflow steps to read this file? If there's no native support, doesn't that mean we'd have to bundle basically the whole GitHub Action into a script that can loop over the values in that data file, rather than using the built-in "matrix" support to run all the steps multiple times?
@aarongable thanks for the feedback! I understand your concerns and will move the discussion of Go build target versioning to a new issue. This PR already accomplishes the multi-architecture build goal and I don't want to hold it up on this tangential point.
I fixed the merge conflicts and multi-arch builds are working. Ready for a final review.
Hi @sheurich,
It's been a while since I looked at this PR and I realize it's grown a lot bigger since I last looked at it! Some of the changes, like adding QEMU for our release builds, look like they will make our builds slower and more complex.
The original PR description said "This enables efficient local development and lays the foundation for future parallel multi-architecture CI builds." The current version shows us those multi-architecture CI builds, and I'm thinking the tradeoff for us having those in the main Boulder repo is probably not worth it.
Am I right in assuming you're deploying arm64 releases to prod? If so, would it be reasonable to build those in CI in your own fork of Boulder?
Hey @sheurich, we haven’t heard back in a while, so we’re going to close this pull request. Feel free to reopen it if it is still or becomes relevant. Thanks!