gutenberg-mobile
Implement cache strategies to speed up CI jobs
Is your feature request related to a problem? Please describe.
Currently, the total time for running the CI jobs of a Git tag is around 1h 10m. This is the time we need to wait when integrating Gutenberg Mobile into the main apps. It would be desirable to reduce the duration in the spirit of speeding up the development workflow, as well as the release process.
Describe the solution you'd like
The following steps are repeated across the CI jobs, so they seem like good candidates for caching in order to reduce the overall duration:
Preparing working directory
- It takes around 4m without cache, and 1m 30s with cache.
This job is in charge of checking out the repositories, including the submodules. It seems we are already caching the GBM repository (reference), although the cache is not applied in some of the jobs, like Lint and Unit Tests.
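As a rough illustration, the checkout step could reuse a previously cloned repository and only pay the full clone cost on a cache miss. The `prepare_checkout` helper and the cache directory layout below are assumptions for the sketch, not the actual Buildkite configuration:

```shell
# Sketch: reuse a cached git checkout instead of cloning from scratch.
# The function name and cache layout are hypothetical.
prepare_checkout() {
  repo_url=$1   # repository to check out
  cache_dir=$2  # persistent directory kept between CI runs

  if [ -d "$cache_dir/.git" ]; then
    # Cache hit: just refresh the existing clone.
    echo "updating cached checkout"
    git -C "$cache_dir" fetch -q origin
  else
    # Cache miss: pay the full clone cost once.
    echo "cloning fresh"
    git clone -q --recurse-submodules "$repo_url" "$cache_dir"
  fi
  # Submodules still need to be synced on every run.
  git -C "$cache_dir" submodule update --init --recursive
}
```

On a warm cache this turns the multi-minute clone into a short incremental fetch, which matches the ~4m vs ~1m 30s numbers above.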
Setup Node environment
- It takes around 20s.
Installs nvm (Node version manager) and Node. The job's duration is fairly low, but we often experience errors in the download process that make the entire job fail. The Node version is rarely updated, so by caching it we could avoid this common error and improve the job's stability.
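One way to sketch this: gate the download behind a cache lookup keyed on the Node version, so the flaky network step only runs on a cache miss. The `setup_node` helper, the `NODE_CACHE_DIR` layout, and the pluggable installer command are all assumptions for illustration:

```shell
# Sketch: install Node only when the requested version is not cached.
# setup_node, NODE_CACHE_DIR, and the installer hook are hypothetical.
setup_node() {
  version=$1
  installer=$2  # command that installs Node into the given directory

  cache="$NODE_CACHE_DIR/node-$version"
  if [ -d "$cache" ]; then
    echo "cache hit for node $version"
  else
    echo "cache miss for node $version"
    mkdir -p "$cache"
    # The flaky download only happens here, on a cache miss.
    "$installer" "$version" "$cache"
  fi
  # Put the cached installation on the PATH for subsequent steps.
  export PATH="$cache/bin:$PATH"
}
```

On a real agent the installer hook would wrap the nvm/Node download; since the Node version changes rarely, most runs would hit the cache and skip the download entirely.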
Install Node dependencies
- It takes around 6m 30s.
Installs all the JS dependencies defined in the package.json files. We could cache them, using the hash of the package-lock.json files as the cache key.
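The cache key described above could be derived along these lines; `compute_cache_key` is a hypothetical helper, and `sha256sum` assumes a Linux agent (`shasum -a 256` would be the macOS equivalent):

```shell
# Sketch: build a node_modules cache key from the hash of every
# package-lock.json in the tree, so the key changes whenever any
# lockfile changes. compute_cache_key is a hypothetical helper.
compute_cache_key() {
  find "$1" -name package-lock.json -not -path '*/node_modules/*' \
    | LC_ALL=C sort \
    | xargs cat \
    | sha256sum \
    | cut -d' ' -f1
}
```

A job would then restore node_modules when an entry for the computed key exists in the cache store, and otherwise run the full install followed by a cache save under that key.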
Describe alternatives you've considered
An alternative would be to unify the jobs so we don't need to execute the above steps multiple times. However, this is not recommended, as we aim to separate jobs by their goal; this way, if any of the steps fail, we can retry just the affected job.
Additional context
N/A
cc @mokagio: since you recently worked on and improved the jobs run in Buildkite, we'd love to hear your insights about these potential cache strategies. Thanks 🙇!
Thank you for posting this @fluiddot 🙇‍♂️
Speeding up the builds is definitely high on my mind while working on the CircleCI to Buildkite migration. I have a few ideas that I'll either document here or try out directly in a PR.
Closing since this was already addressed 🚀. For future improvements, we could create a new issue.