freenet-core
CI takes over 20 minutes
https://github.com/freenet/locutus/actions/runs/3073961505/jobs/4966424064
Looks like most of it is the Build step.
Per @iduartgomez:
- [ ] Switch to Mold linker
- [ ] Upgrade to get better CI instances
Free-tier GitHub runners have one core and are slow; that's why the build process takes so long, since rustc parallelizes at the crate level.
We must also take a look at the caching we are using.
Ah, interesting. I'll investigate an upgrade.
Another thing we must do is use a linker other than the default gcc one; using lld, or even better mold, should speed up the linking step quite a lot, especially if we have access to a multicore machine with mold.
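For reference, a minimal sketch of what that switch could look like, assuming mold and clang are installed on the runner (the exact flags are an assumption, not the project's current configuration):

# route linking through clang and tell it to use mold instead of the default ld
RUSTFLAGS="-C linker=clang -C link-arg=-fuse-ld=mold" cargo build --all-features
# alternatively, mold can wrap the whole build and intercept the linker itself
mold -run cargo build --all-features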
I've heard good things about mold.
I think we are not using artifact caching properly:
maybe we're not pointing to the right directory, etc.
We need to look at this.
I played with mold a bit.
System: Linux 5.10.0-18-amd64 Debian 5.10.140-1 (2022-09-02) x86_64 GNU/Linux
Hardware: HP EliteBook 850 G8, 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz, 32 GB RAM
Commit (change with building rocksdb from scratch): https://github.com/freenet/locutus/pull/460
Commands:
cargo clean
cargo build --all-features
# check whether mold was actually used or not
readelf -p .comment target/debug/locutus-node
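If mold was actually used, the .comment section dump should include a mold version string; the output below is illustrative only, and the exact wording and version are assumptions:

String dump of section '.comment':
  ...
  mold 1.x (compatible with GNU ld)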
Results
without mold: Finished dev [unoptimized + debuginfo] target(s) in 11m 41s
with mold: Finished dev [unoptimized + debuginfo] target(s) in 11m 29s
After several runs, sometimes it's even faster without mold. So overall I'd say it's ±10 seconds in either direction, meaning mold doesn't help (on Linux at least, as far as I can see).
Yes, the main problem is that we rebuild all crates in almost all builds, so we are not making good use of build caching. Default GitHub runners have one core and aren't the fastest, so it takes a while to build all the dependencies. The final linking step doesn't take that long, as you saw...
If we can improve/fix build caching, it should get much better.
Is this a simple matter of doing something like this?
steps:
  - name: Checkout code
    uses: actions/checkout@v2
  - name: Cache cargo registry
    id: cache-cargo-registry
    uses: actions/cache@v2
    with:
      path: ~/.cargo
      key: cargo-registry-${{ hashFiles('Cargo.lock') }}
      restore-keys: cargo-registry-
  - name: Build
    run: cargo build --release
  - name: Cache cargo registry
    if: always()
    uses: actions/cache@v2
    with:
      path: ~/.cargo
      key: cargo-registry-${{ hashFiles('Cargo.lock') }}
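As an aside, actions/cache@v2 saves the cache automatically in a post-job step, so the trailing cache step above is likely redundant. Also, caching all of ~/.cargo keeps the registry but not the compiled crates in target/, which is where most of the rebuild time goes. A sketch that covers both; the paths and keys are assumptions rather than the project's actual setup:

- name: Cache cargo registry and build artifacts
  uses: actions/cache@v2
  with:
    path: |
      ~/.cargo/registry
      ~/.cargo/git
      target
    key: cargo-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: cargo-${{ runner.os }}-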
Yes, I just checked on a project where I control the GitHub Actions CI, and it works.
So I think we can integrate that. But to integrate it correctly, someone who has rights to manipulate the actions should probably do it (cc @iduartgomez), so it will be easier to fix a problem if something goes wrong.
Nevertheless, if creating a PR would be helpful, I can do that.
@kakoc I think a pull request would be great if @iduartgomez can confirm; perhaps there is a reason this approach wasn't used before.
Worth a shot, let's try it out with a PR.
This has been greatly improved with the changes done in the messaging app PR: https://github.com/freenet/locutus/actions/runs/5724414933/job/15510859787?pr=658
We'll take another look if necessary in the future, but the next step would probably be using a multicore machine for CI.
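For reference, pointing a job at a larger runner is just a runs-on change; the label below is an assumption, since actual names depend on the runners configured for the organization:

jobs:
  build:
    # hypothetical larger-runner label; real labels vary by plan/organization
    runs-on: ubuntu-latest-8-cores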