feat(docker): use matrix for building multi-platform Docker images
Description
Motivation and Context
Docker builds are hella slow. This might be a way to speed them up.
See #879
How Has This Been Tested?
Not tested yet.
Types of Changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation (no code change)
- [x] Refactor (refactoring production code)
- [ ] Other
Checklist:
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 40.10%. Comparing base (8b7c200) to head (695de60). Report is 4 commits behind head on main.
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main     #888      +/-   ##
==========================================
+ Coverage   40.04%   40.10%   +0.06%
==========================================
  Files          21       21
  Lines        1671     1671
==========================================
+ Hits          669      670       +1
+ Misses       1002     1001       -1
```
| Flag | Coverage Δ | |
|---|---|---|
| unit-tests | 40.10% <ø> (+0.06%) :arrow_up: | |
Hey!
I already mentioned this in the issue that caused this PR, but I guess for the sake of completeness I will leave this here as well. AFAIK this can be done by using a native arm64 build node. Sadly, those are only planned to be available for OSS projects by the end of the year. See here.
Now we hit another thing: https://github.com/orhun/git-cliff/actions/runs/11075086138/job/30775267343
https://github.com/docker/setup-buildx-action/issues/176
:/
It looks like we can use colima: https://github.com/actions/runner-images/issues/6216
The last two comments mention it is (or at least was) not available on the macos-13 and -14 runners, so I suspect it has to be installed manually. It also looks like it is not available on ARM-based macOS nodes.
Edit: I am currently experimenting with bringing colima to life... so far no luck. See here
My current understanding is that this has to wait until GitHub launches their Linux-based arm64 runners. My investigation led me to the following findings:
- https://github.com/crazy-max/ghaction-setup-docker --> not supported on arm64
- colima --> does not work because the arm64 runners lack the hypervisor (https://github.com/abiosoft/colima/issues/1023)
- https://github.com/marketplace/actions/setup-docker --> emits a warning that it is only available on x86_64
- macOS runner images ship without a working Docker due to licensing issues --> (https://github.com/actions/runner-images/issues/2150)
This comes to the same conclusion.
Another solution would be to somehow split the work that is done in the Dockerfile (basically the compilation) into multiple build stages, and only build one stage in one build step and then use the result in the next build step... but this feels needlessly complex. Or, what might work, though it is a little bit prone to dependency issues: build outside of Docker and only copy the binaries into the Docker image during the build.
I really appreciate you investing time into this and making everything clear, @LtdSauce - sadly it is an unfortunate situation.
> Another solution would be to somehow split the work that is done in the Dockerfile (basically the compilation) into multiple build stages, and only build one stage in one build step and then use the result in the next build step... but this feels needlessly complex.
Can't say that I fully understood it, but yeah, sounds complex for sure.
> Or, what might work, though it is a little bit prone to dependency issues: build outside of Docker and only copy the binaries into the Docker image during the build.
It might work but sounds too hacky :/
Splitting the Dockerfile will not help, as the issue in the build is only triggered during cargo build (compilation).
Copying the binary could be a solution for now (a hacky one for sure), but we would need two different Dockerfiles, as users want an easy build and we don't want to ask them to "build git-cliff locally and then build the Dockerfile" (if they want to build it themselves).
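For illustration, that second, copy-only Dockerfile could be as small as this (a sketch only; the base image and binary path are assumptions, not the project's actual setup):

```Dockerfile
# Hypothetical copy-only Dockerfile: assumes `cargo build --release`
# was already run on the host for the matching target platform.
FROM debian:bookworm-slim
COPY target/release/git-cliff /usr/local/bin/git-cliff
ENTRYPOINT ["git-cliff"]
```

This is exactly the second Dockerfile the comment above refers to: it cannot build git-cliff on its own, so it would have to live alongside the self-contained one.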
Hey,
> Can't say that I fully understood it, but yeah, sounds complex for sure.
I kept thinking about my idea. Maybe the following explains it a little better.
Disclaimer: I don't have enough knowledge about how to split up compilations in cargo. But if I get it right, then it might be possible to do the following, as I think I found 3 compilation targets in the project. If I am wrong and there is no way to build things separately, then please just ignore my comment. Otherwise, the following might work:
First, adjust the Dockerfile:
Instead of doing a full build in https://github.com/orhun/git-cliff/blob/bfe9beb093df42badf762945cf04548fbe75949a/Dockerfile#L14, you could split that up into several build commands and put each in its own Docker build stage, e.g. something like this:
```Dockerfile
FROM builder AS lib-builder
RUN cargo build --release --locked --no-default-features --features github --features gitlab --features bitbucket --lib

FROM lib-builder AS bin-builder
RUN cargo build --release --locked --no-default-features --features github --features gitlab --features bitbucket --bins
```
And then in the runner stage use --from=bin-builder instead of builder.
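Concretely, that might look like this (a sketch; the runner base image and the binary path inside the build stage are assumptions):

```Dockerfile
FROM alpine:3.20 AS runner
# Take the compiled binary from the last build stage instead of `builder`
COPY --from=bin-builder /app/target/release/git-cliff /usr/local/bin/git-cliff
ENTRYPOINT ["git-cliff"]
```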
Is it possible to build just the dependencies alone? If so, that could be another stage before the lib-builder stage (see the sketch after the next paragraph). Also, as there are 2 binaries in the workspace, those could be built in their own stages as well.
This change is not visible to the user of the Dockerfile if one just runs docker build.
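For the dependencies-only question above, one common pattern would be a cargo-chef stage (cargo-chef also comes up later in this thread; this is a sketch, and the stage names and paths are assumptions):

```Dockerfile
FROM rust:1-slim AS chef
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
# Produce a dependency "recipe" from the workspace manifests
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS deps-builder
COPY --from=planner /app/recipe.json recipe.json
# Build only the dependencies; this layer stays cached as long as
# the dependency set does not change
RUN cargo chef cook --release --recipe-path recipe.json
```

The lib-builder stage could then start FROM deps-builder instead of builder.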
Second, split the workflow into multiple jobs:
After this, something like https://github.com/djbender/github-actions-docker-caching-example/blob/main/.github/workflows/main.yml could be applied to cache each of the above layers in its own build job. So basically, have one build job for each stage and let them run in sequence. This is necessary as a build job needs to terminate within 6h... splitting the stages over the steps in one job would be easier, but that is not possible here.
Stripped down docker.yml to illustrate it:
```yaml
# Jobs run in sequence via `needs`, each with its own 6h limit.
jobs:
  lib:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        id: docker_build
        uses: docker/build-push-action@v6
        with:
          context: ./
          file: ./Dockerfile
          target: lib-builder
          push: false
  bin:
    needs: lib
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        id: docker_build
        uses: docker/build-push-action@v6
        with:
          context: ./
          file: ./Dockerfile
          target: bin-builder
          push: false
  publish:
    needs: bin
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Push
        id: docker_build
        uses: docker/build-push-action@v6
        with:
          context: ./
          file: ./Dockerfile
          target: runner
          push: true
```
Additionally, the caching from the above-mentioned link has to be used.
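As a sketch, the buildx GitHub Actions cache backend could be wired into each build step like this (cache-from/cache-to are existing docker/build-push-action inputs; the target is taken from the example above):

```yaml
      - name: Build
        uses: docker/build-push-action@v6
        with:
          context: ./
          file: ./Dockerfile
          target: lib-builder
          push: false
          # Reuse layers cached by earlier jobs and cache this job's layers
          cache-from: type=gha
          cache-to: type=gha,mode=max
```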
I have no time to try this out this week though :/ But feel free to adapt my idea if you think it is worth it. (I'll try it out as soon as I have some spare time to fiddle around with this idea.)
And this can only work as long as the individual cargo build steps do not take more than 6h each.
I think you are onto something there... it might work. We should also check out cargo-chef to see if someone else did something similar.
I'm a bit burnt out by these Docker issues recently, feel free to try it out whenever you have time! I might come back to it next week :)
FTR: I left a comment in the issue that caused this PR, as I am now quite certain that the actual issue lies in the usage of cargo-chef. If the proposed solution there is not sufficient to bring the arm64 builds back, then it could be combined with what I proposed here. But instead of doing multiple cargo build jobs, it should be enough to put the cargo chef cook in one job and the cargo build in another one... then both can take up to 6h to complete.
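A sketch of that two-job split (assuming a deps-builder stage that runs cargo chef cook, as sketched earlier, and a final runner stage; all names are illustrative):

```yaml
jobs:
  cook:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/build-push-action@v6
        with:
          context: ./
          file: ./Dockerfile
          target: deps-builder   # stage that runs `cargo chef cook`
          push: false
          cache-to: type=gha,mode=max
  build:
    needs: cook
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/build-push-action@v6
        with:
          context: ./
          file: ./Dockerfile
          target: runner         # final stage that runs `cargo build`
          push: true
          cache-from: type=gha
```

Each job then gets its own 6h limit.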