CircleCI: add layer 2 cache of newly added coreboot git forks to speed up builds from cache
Master currently doesn't cache or reuse coreboot's git fork build dirs as their own layer (layer 2); they are only reused through layer 3 when no module config has changed.
So this PR caches the coreboot git dirs so that the crossgcc toolchain is built once and reused even when cache layer 3 is invalidated by other modules having changed while coreboot hasn't. That speeds up builds from cache: the coreboot build dir is restored and only the modules that changed get recompiled.
As of today, cache layer 3 (reused when only scripts have changed) encompasses all build dirs.
Reminder (see the key sketch after this list):
- cache layer 1: musl-cross-make only; reused based on the hash of the modules/musl file
- cache layer 2: coreboot build cache + musl-cross-make; reused based on modules/coreboot + modules/musl
- cache layer 3: all build dir caches; reused based on the Makefile and modules files
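As a rough illustration of how such layered keys can be expressed in CircleCI (the key names and checksummed files below are hypothetical, not the exact Heads config):

```yaml
# Minimal sketch of layered cache restore, assuming hypothetical key names.
# CircleCI's {{ checksum "..." }} template hashes the named file; keys are
# tried in order, so the broadest (layer 3) wins when it matches.
- restore_cache:
    keys:
      # layer 3: all build dirs (simplified here to the Makefile hash alone)
      - heads-layer3-{{ checksum "Makefile" }}
      # layer 2: coreboot build cache + musl-cross-make
      - heads-layer2-{{ checksum "modules/coreboot" }}-{{ checksum "modules/musl" }}
      # layer 1: musl-cross-make only
      - heads-layer1-{{ checksum "modules/musl" }}
```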
Not sure about the Matrix bridge config for GitHub. As of now, someone needs to put the PR as draft and back to ready-for-review to have a message from the integration posted into the channel, so as not to spam (pull request creation is not publicised). But that means only reviewed and ready-for-review→draft→ready-for-review events are posted. Testing.
@tlaurion I thought last time we tried this, the problem was that we were going over a size limit, or saving the cache itself took too long, or something like that. (But maybe I recall incorrectly.) If this does save time and doesn't hit any limits, I'm all for it.
Currently though, I can't see from the available pipelines how long it takes to save/restore or what the cache sizes are:
- The first pipeline didn't save any caches, keys already existed: https://app.circleci.com/pipelines/github/tlaurion/heads/2174/workflows/b7fa3e97-2a39-435a-af36-69eb7c475e6d/jobs/37981
- I don't see a subsequent build to check how long it would take to restore the cache (there's another build linked above, but it seems older).
Could you get a pipeline through that saves the caches and then one that restores it?
Changing cache name
Starting a clean build
It only got coreboot-4.19 and coreboot-nitrokey:
```
Warning: could not archive /root/project/build/x86/coreboot-4.11 - Not found
Warning: could not archive /root/project/build/x86/coreboot-4.13 - Not found
Warning: could not archive /root/project/build/x86/coreboot-4.14 - Not found
Warning: could not archive /root/project/build/x86/coreboot-4.15 - Not found
Warning: could not archive /root/project/build/x86/coreboot-4.17 - Not found
Warning: could not archive /root/project/build/x86/coreboot-dasharo-kgpe-d16 - Not found
Warning: could not archive /root/project/build/x86/coreboot-purism - Not found
```
The cache job does not appear to be downstream of any Librem boards using coreboot-purism. KGPE-D16 isn't built in CI at all currently. (I don't mind leaving the other keys in the hopes of fixing that, they don't really harm anything, but coreboot-purism was intended to be cached by the PR.)
Layer 2 cache went from 3.3 GB to 5.0 GB. Layer 3 is already 7.8 GB so that itself is fine, but moving the cache job downstream of coreboot-purism will probably increase layer 3 by 1-2 GB as well.
@tlaurion Do you want it to cache coreboot-purism or leave it as-is?
Given that Layer 3 is already 7.8 GB, the increase in layer 2 is probably fine (not sure it's a good use of scarce time to trigger another job to see how long it takes to download). But if Layer 3 increases to ~10 GB we might want to see how that impacts build time.
For reference, latest build: https://app.circleci.com/pipelines/github/tlaurion/heads/2174/workflows/03401bf0-276f-4f51-9a05-8880daab7d0c/jobs/38209
Last time master generated layer 2 cache: https://app.circleci.com/pipelines/github/linuxboot/heads/711/workflows/5685c1d7-cb4e-4da7-ba6a-e2c5ee682633/jobs/14195
So as of now, consider the picture + OP explanation for the 3 layers of workspace cache on a clean build. Workspace caches can overwrite a previous layer's cache content when passed to the next layer, and save_cache cannot combine overlapping cache content without failing: the contents of a combined cache need to be exclusive, which is why only different-architecture caches (/build/x86, /build/ppc64) can be combined (see the save_cache sketch after this list):
- x230-hotp-maximized provides the coreboot 4.19 layer 2 workspace cache; talos-2 provides the coreboot dasharo git fork layer 2 workspace cache
- nitropad-nv41 is the coreboot dasharo novacustom git fork layer 2 workspace cache for coreboot
- save_cache combines the previous workspace caches and saves the layer 1, 2 and 3 caches
- librem_14 is the coreboot purism git fork layer 2 workspace cache for coreboot and reuses the build modules of x230-hotp-maximized
- librem_14 is not part of save_cache; consequently it is not restored at prep_step, and the coreboot toolchain is rebuilt on every build, clean or not, while the other modules' cache is reused
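For illustration, a minimal save_cache sketch showing that exclusivity constraint (the key name and paths are hypothetical, not the exact Heads config):

```yaml
# Hypothetical save_cache step: combined paths must not overlap, so only
# different-architecture trees (x86, ppc64) are merged into one cache.
# CACHE_VERSION in the key lets a clean rebuild invalidate old caches.
- save_cache:
    key: heads-{{ .Environment.CACHE_VERSION }}-layer2-{{ checksum "modules/coreboot" }}
    paths:
      - build/x86
      - build/ppc64
```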
Previous discussions, if my memory is right, suggested changing the layer dependencies so that (see the workflow sketch after this list):
- x230-hotp-maximized and talos-2 are layer 2 workspace caches and are passed to other layers
- librem_14 and nitropad-nv41 are swapped, letting the librem_14 cache be part of the save_cache layer 1-2-3 restored at prep_step
That way, all Purism boards should benefit from the cache save/restore (6 other Purism Librem boards reusing the cache vs 1 other Nitrokey board reusing it currently).
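A rough sketch of that reordered job graph (job names are taken from this thread; the prep_env job and the exact requires wiring are assumptions, not the real Heads .circleci/config.yml):

```yaml
# Hypothetical workflow wiring after the swap: librem_14 feeds save_cache,
# and nitropad-nv41 moves downstream to consume the restored cache instead.
workflows:
  build:
    jobs:
      - prep_env                        # hypothetical setup job
      - x230-hotp-maximized:            # coreboot 4.19 layer 2 cache
          requires: [prep_env]
      - talos-2:                        # coreboot dasharo fork layer 2 cache (ppc64)
          requires: [prep_env]
      - librem_14:                      # coreboot purism fork layer 2 cache
          requires: [x230-hotp-maximized]
      - save_cache:                     # combines and saves layer 1-2-3 caches
          requires: [librem_14, talos-2]
      - nitropad-nv41:                  # now reuses the saved/restored cache
          requires: [save_cache]
```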
@JonathonHall-Purism I'll launch a rebuild with cache now, then change CACHE_VERSION to today's date and rebuild clean after having implemented the hierarchy change above in the CircleCI config.
There must be something better to do with the caches, but even though I dug into other GitHub projects using CircleCI, there don't seem to be many projects on the free tier doing massive caches like Heads does, so nobody has complained or seems to have done something better. Wish I had a CircleCI specialist on hand here.
Partly implemented under https://github.com/linuxboot/heads/pull/1604/files#diff-78a8a19706dbd2a4425dd72bdab0502ed7a2cef16365ab7030a5a0588927bf47R196-R199 for #1604
Brought to parity with this PR in #1604's 7fe2f9d